Since 10/4/2007, 10:44:31 PM, @webmaven has earned 9407 karma points across 4464 contributions.
Recent @webmaven Activity
Sarcasm, eh? At least there's no way THAT could be taken the wrong way.
Upcoming Project Policy Changes – LLVM Project
1 point • 0 comments
> This statement is demonstrably false. Yes, people intuitively apply game theory all the time.
People do intuitively apply game theory, but they do so in ways that aren't strictly rational.
Rationally, deterrence is a function of the penalty adjusted by the odds of being caught and convicted, which means that doubling the penalty should have the same additional deterrent effect as doubling the odds of conviction. In practice, however, increasing penalties has a relatively small deterrent effect, and increasing enforcement a relatively large one.
One way to explain this is to assume that getting caught and convicted has large fixed costs that are independent of the length of sentence.
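To make that concrete, here's a toy model (all numbers invented for illustration, not empirical estimates): if the expected cost of committing a crime is the probability of being caught times the sum of the fixed costs and the sentence, then doubling enforcement scales the fixed costs too, while doubling the sentence does not.

    # Toy deterrence model: expected cost = P(caught) * (fixed costs + sentence).
    # All numbers are illustrative, not empirical estimates.
    def expected_cost(p_caught: float, fixed_cost: float, sentence: float) -> float:
        return p_caught * (fixed_cost + sentence)

    baseline           = expected_cost(0.10, 10.0, 5.0)   # -> 1.5
    double_sentence    = expected_cost(0.10, 10.0, 10.0)  # -> 2.0
    double_enforcement = expected_cost(0.20, 10.0, 5.0)   # -> 3.0

    # Doubling enforcement doubles the expected cost, while doubling the
    # sentence only raises it by a third, because the fixed costs of being
    # caught (arrest, trial, stigma, lost job) dominate the marginal years.

Under this toy model, deterrence tracks enforcement far more closely than sentence length, which matches the empirical pattern.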
This tends to cut against the dominant tough-on-crime narrative that focuses on increasing penalties, BTW, but for some reason hiring more judges, prosecutors, public defenders, detectives, CSIs and other people to solve cases and actually go to trial isn't as popular as just mandating longer sentences (and coercing defendants to agree to a plea deal).
Tracing Knowledge in Language Models Back to the Training Data
3 points • 0 comments
XMPP wasn't adopted by large organizations that valued interop and federation. They valued the implementations, but almost invariably used them to build a walled garden.
I'm not entirely certain why this was, but I suspect that valuing extensibility over interoperability (or the perception of it) was part of it. Optional extensions and interop with hosts that used a different set simply gave hosts/operators too much to think about in terms of deploying the service for users.
> It worked out well. We had no problems reading our code.
That's not the relevant test, even for fairly large teams. More important is whether someone from another team (or more critically, a brand new hire) can read and modify the code.
That said, based on what you're describing, folks from outside your team ought to get comfortable pretty quickly too.
The hype around DeepMind’s new AI model misses what’s cool about it
2 points • 1 comment
> their model probably shows an image of a woman when you type in "nurse" but they consider that a problem.
There is a difference between "probably" and "invariably." Would it be so hard for the model to show male nurses at least some of the time?
> I can understand why people wouldn’t want a tool they have created to be used to generate disturbing, offensive or disgusting imagery. But I don’t really see how doing that would be dangerous.
Propaganda can be extremely dangerous. Limiting or discouraging the use of powerful new tools for unsavory purposes such as creating deliberately biased depictions for propaganda purposes is only prudent. Ultimately it will probably require filtering of the prompts being used in much the same way that Google filters search queries.
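As a minimal sketch of what prompt filtering could look like (I don't know what Google or any image-model vendor actually does, and a real system would use trained classifiers rather than a blocklist; the terms below are placeholders):

    # Hypothetical prompt filter run before generation; the blocklist
    # approach and the terms in it are purely illustrative.
    BLOCKED_TERMS = {"placeholder_slur", "placeholder_propaganda_trope"}

    def is_allowed(prompt: str) -> bool:
        return set(prompt.lower().split()).isdisjoint(BLOCKED_TERMS)

    prompt = "a nurse at work"
    if is_allowed(prompt):
        ...  # hand the prompt to the image model
    else:
        ...  # refuse, or return a canned explanation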
It isn't wrong, but we aren't talking about the model somehow magically transcending the data it's seen. We're talking about making sure the data it sees is representative, so the results it outputs are as well.
Given that male nurses exist (and though less common, certainly aren't rare), why has the model apparently seen so few?
There actually is a fairly simple explanation: the images it has seen labelled "nurse" are more likely to come from stock photography sites than to be photos of actual nurses, and stock photography is often stereotypical rather than typical.
At the very least we should expect that the results not be more biased than reality. Not all criminals are Black. Not all are men. Not all are poor. If the model (which is stochastic) only outputs poor Black men, rather than a distribution that is closer to reality, it is exhibiting bias and it is fair to ask why the data it picked that bias up from is not reflective of reality.
> If the input text lacks context/nuance, then the model must have some bias to infer the user's intent. This holds true for any image it generates; not just the politically sensitive ones. For example, if I ask for a picture of a person, and don't get one with pink hair, is that a shortcoming of the model?
You're ignoring that these models are stochastic. If I ask for a nurse and always get an image of a woman in scrubs, then yes, the model exhibits bias. If I get a male nurse half the time, we can say the model is unbiased WRT gender, at least. The same logic applies to CEOs always being old white men, criminals always being Black men, and so on. Stochastic models can output results that when aggregated exhibit a distribution from which we can infer bias or the lack thereof.
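A hypothetical sketch of how you'd measure that in practice: sample the model repeatedly with a fixed prompt, classify the outputs, and compare the aggregate proportions to a real-world base rate (the generator, classifier, and rates below are stand-ins, not real data):

    import random

    REAL_WORLD_MALE_NURSE_RATE = 0.12  # illustrative figure, not an official statistic

    def generate_and_classify(prompt: str) -> str:
        # Stand-in for: image = model(prompt); label = classifier(image)
        return random.choices(["male", "female"], weights=[0.01, 0.99])[0]

    samples = [generate_and_classify("a nurse") for _ in range(1000)]
    observed = samples.count("male") / len(samples)

    # If the observed rate is far below the real-world base rate, the model's
    # aggregate output distribution is more biased than reality.
    print(f"observed: {observed:.1%} vs. real world: {REAL_WORLD_MALE_NURSE_RATE:.1%}")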
> We certainly don't want to perpetuate harmful stereotypes. But is it a flaw that the model encodes the world as it really is, statistically, rather than as we would like it to be? By this I mean that there are more light-skinned people in the west than dark, and there are more women nurses than men, which is reflected in the model's training data. If the model only generates images of female nurses, is that a problem to fix, or a correct assessment of the data?
If the model only generated images of female nurses, then it is not representative of the real world, because male nurses exist and they deserve not to be erased. The training data is the proximate cause here, but one wonders what process distorted "most nurses are female" into "nearly all nurse photos are of female nurses." Something amplified a real-world imbalance into a dataset that exhibits more bias than the real world, and training the AI then bakes that bias into an algorithm (which may end up further reinforcing the bias in the real world, depending on the use cases).
> abortions the woman chose to take the action whose sole biological purpose is pregnancy.
If you think that, you really haven't been doing it right. Pleasure, companionship, comfort, reinforcement of social status, and many other purposes are commonly fulfilled by the act of sex, including between same sex partners.
But let us say that you're correct. That still leaves 1% of pregnancies that are involuntary. What should women be allowed to do then?
We aren't going to agree on when a fertilized egg becomes a person (I think you would say immediately, I would say only when viable outside the womb), but let's set that aside.
I don't think women should be forced to continue an unwanted pregnancy. You don't think destroying the fetus at any point is moral. What if we could both get what we want?
https://www.nature.com/articles/ncomms15112
Please note that unless women are provided with comprehensive sex education that includes contraception, there are going to be a lot more unwanted babies being born, regardless of how they are brought to term. This imposes a lot of costs on society that I hope you have some sort of plan to cover.
Yes, but this links directly to the paper. Also, the story you linked to redirects to a different URL.
Hardly. Same incident, but different site and article author.
Plus, this one's headline is WAY awesomer.
Gut wrenching.
Friends don’t let friends train small diffusion models
90 points • 24 comments
> Of course once someone trains an AI with a robotic arm to do the actual painting, then your worry holds firm.
It's been done, starting with plotter-based solutions years ago, through the work of folks like Thomas Lindemeier:
https://scholar.google.com/citations?user=5PpKJ7QAAAAJ&hl=en...
Up to and including actual painting robot arms that dip brushes in paint and apply strokes to canvas today:
https://www.theguardian.com/technology/2022/apr/04/mind-blow...
The painting technique isn't all that great yet for any of these artbots working in a physical medium, but that's largely down to a general lack of dexterity in manual tool use rather than an art-specific challenge. I suspect that RL environments that physically model the application of paint with a brush would help advance the SOTA. It might be cheaper to model other mediums like pencil, charcoal, or even airbrushing first, before tackling more complex, dimensional mediums like oil paint or watercolor.
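Speculatively, such an environment might expose a gym-style step/reward interface where an action is a single stroke and the reward tracks similarity to a target image. Everything below is a hypothetical sketch, not an existing library; a real version would replace the stroke placeholder with a physical paint simulator.

    import numpy as np

    class StrokePaintingEnv:
        # Hypothetical stroke-painting environment; images are HxWx3 arrays in [0, 1].
        def __init__(self, target: np.ndarray):
            self.target = target
            self.canvas = np.ones_like(target)  # start from a blank white canvas

        def step(self, action: np.ndarray):
            # action: [x, y, radius, r, g, b], all components in [0, 1]
            self._apply_stroke(action)
            # Reward: negative pixel-wise distance to the target (higher is closer).
            reward = -float(np.mean((self.canvas - self.target) ** 2))
            return self.canvas, reward, False, {}

        def _apply_stroke(self, action: np.ndarray):
            # Placeholder: stamp a filled square; a real env would simulate
            # brush load, pressure, and paint mixing here instead.
            h, w = self.canvas.shape[:2]
            x, y = int(action[0] * (w - 1)), int(action[1] * (h - 1))
            r = max(1, int(action[2] * 8))
            self.canvas[max(0, y - r):y + r, max(0, x - r):x + r] = action[3:6]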
> The future is thin clients. It's not just going to be our industry, either. Every creative industry is going to undergo this change.
It is worth noting, though, that the thin clients will mostly be the equivalent of a Chromebook, which is considerably beefier than a top-of-the-line developer workstation from only a few years ago.