Selfishness is, by definition, having little regard for others. I don’t really see the connection; how is refraining from procreation a disregard for others? If anything, choosing not to have kids is selfless, since it means setting aside the drive to propagate my genetic lineage. The nice thing about sentience is that we’re not beholden to our biology. In most cases, we consider it a good thing to rise above being driven by instinct.
Hackworth
- 0 Posts
- 166 Comments
I just don’t want kids.
I am also arcane as fuck, cause I shop at Horno’s Magic Item Hole.
Hackworth@piefed.cato
Technology@lemmy.world•Google sues web scraper for sucking up search results ‘at an astonishing scale’English
38 · 5 days ago
[Image: Three Spider-Men pointing meme]
If it’s been with us all through history, I guess the further back you go, the more likely it is that you will have seen someone be crushed by the hand. And global travel? That’s the foundation for a persistent global mythology that’d probably dominate our storytelling for a while. I don’t know what all happens between then and the point where we have satellites that track the hand all the time, but I bet there’s potential for a novel. Or an anime. Or at least a comic.
Hackworth@piefed.cato
Fuck AI@lemmy.world•AI data centers used a world’s supply of bottled water in just one yearEnglish
82 · 5 days ago
Data centers can also use closed-loop cooling, air cooling, immersion cooling, etc.; they’re just using potable water because it is the cheapest (for them). But even if they didn’t innovate at all, the high end of that estimate is like 0.02% of yearly global freshwater withdrawals. As you say, the devastating part is that location constraints determine who bears the externalities.
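For scale, here’s a rough back-of-the-envelope check on that percentage. The input figures are assumptions of mine, not numbers from the article: roughly 450 billion liters of bottled water consumed globally per year, and roughly 4,000 km³ of global freshwater withdrawals.

```python
# Back-of-envelope: one year of bottled water as a share of global
# freshwater withdrawals. Both inputs are assumed round numbers.
bottled_liters = 450e9                       # ~1 year of bottled water, liters
withdrawals_km3 = 4_000                      # global freshwater withdrawals
withdrawals_liters = withdrawals_km3 * 1e12  # 1 km³ = 1e12 liters
share = bottled_liters / withdrawals_liters
print(f"{share:.5%}")  # ~0.011%, same order as the 0.02% high-end estimate
```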
Hackworth@piefed.cato
News@lemmy.world•ICE Contracts Company Making Bounty Hunter AI AgentsEnglish
3 · 5 days ago
Hah, yeah, I also noticed the contrast. If I were cutting a video about this, I’d definitely use the first bit and not the second. I think there’s a kernel of truth in AI accelerating science. But the weird hierarchy with physics at the top is just the common misunderstanding about the nature of emergent properties, mixed with some old-fashioned elitism. And we’re way closer to the AI surveillance state than we are to automated AI research laboratories.
Hackworth@piefed.cato
Games@lemmy.world•Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI'English
2 · 5 days ago
As I understand it, CLIP (and other text encoders in diffusion models) aren’t trained like LLMs, exactly. They’re trained on image/text pairs, which ya get from the metadata creators upload with their photos to Adobe Stock. OpenAI trained CLIP on the alt text of scraped images, but I assume Adobe would want to train their own text encoder on the more extensive tags on the stock images it’s already using.
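To make “trained on image/text pairing” concrete, here’s a minimal sketch of a CLIP-style contrastive objective: matching image/caption pairs get pushed together, mismatched ones apart. The tensors and function names are hypothetical; this is not Adobe’s or OpenAI’s actual code.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of (image, caption) pairs."""
    # Normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # logits[i, j] = similarity between image i and caption j.
    logits = image_emb @ text_emb.T / temperature
    # Matching pairs sit on the diagonal; score both directions.
    targets = torch.arange(len(logits), device=logits.device)
    loss_images = F.cross_entropy(logits, targets)   # image -> its caption
    loss_texts = F.cross_entropy(logits.T, targets)  # caption -> its image
    return (loss_images + loss_texts) / 2
```

No next-token prediction anywhere, which is the sense in which it isn’t trained like an LLM.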
All that said, Adobe hasn’t published their entire architecture. And there were some reports during the training of Firefly 1 back in '22 that they weren’t filtering out AI-generated images in the training set. At the time, those made up ~5% of the full stock library. Currently, AI images make up about half of Adobe Stock, though filtering them out seems to work well. We don’t know whether they were included in later versions of Firefly. There’s an incentive for Adobe to filter them out, since AI trained on AI tends to lose its tails (the ability to handle edge cases well), and that would be pretty devastating for something like generative fill.
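“Losing the tails” is easy to show with a toy loop: refit a distribution to samples drawn from the previous generation’s fit, and the spread collapses. Purely illustrative; not a claim about Firefly’s pipeline.

```python
# Toy "model trained on its own output" loop. Each generation refits a
# Gaussian to samples from the previous fit; the spread tends to shrink.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 0.0, 1.0, 50  # start from a standard normal

for gen in range(1, 51):
    samples = rng.normal(mu, sigma, n)         # this generation's "output"
    mu, sigma = samples.mean(), samples.std()  # retrain on that output
    if gen % 10 == 0:
        print(f"generation {gen}: sigma = {sigma:.3f}")

# Extreme values are rarely sampled, so each refit underestimates the
# spread, and the edge cases (the tails) gradually disappear.
```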
I figure we want to encourage companies to do better, whatever that looks like. For a monopolistic giant like Adobe, they seem to have at least done better. And at some point, they have to rely on the artists uploading stock photos to be honest. Not just about AI, but about release forms, photo shoot working conditions, local laws being followed while shooting, etc. They do have some incentive to be honest, since Adobe pays them, but I don’t doubt there are issues there too.
Hackworth@piefed.cato
Technology@beehaw.org•Journalists convinced an AI vending machine to give them free stuff like a PS5English
12 · 6 days ago
Here’s the 60 Minutes piece and Anthropic’s June article about the one in their own office.
> Claudius was cajoled via Slack messages into providing numerous discount codes and let many other people reduce their quoted prices ex post based on those discounts. It even gave away some items, ranging from a bag of chips to a tungsten cube, for free.
Their article on this trial has some more details too.
Hackworth@piefed.cato
Technology@beehaw.org•Journalists convinced an AI vending machine to give them free stuff like a PS5English
221 · 6 days ago
That was part of the idea, though; Anthropic designed this as a stress test to begin with. Previous runs in their own office had surfaced similar issues.
Hackworth@piefed.cato
Television@piefed.social•Paramount+ Set to Raise Prices, Phase Out Free Trials In the New YearEnglish
10 · 6 days ago
I’m now out of streaming services. I guess it’s back to video games.
Hackworth@piefed.cato
Ask Lemmy@lemmy.world•I'm not an artist. If I sketch something, and have an ai upscale it for me, is it still slop, vs just asking it to do everything?English
151 · 6 days ago
Not all models are trained in the same way. Adobe Firefly was trained only with images from Adobe Stock, for instance.
Hackworth@piefed.cato
Games@lemmy.world•Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI'English
1 · 6 days ago
The Firefly image generator is a diffusion model, and the Firefly video generator is a diffusion transformer. LLMs aren’t involved in either process; rather, the models learn image-text relationships from metadata tags. I believe there are some ChatGPT integrations with Reader and Acrobat, but that’s unrelated to Firefly.
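For what that conditioning looks like mechanically, here’s a minimal sketch of a denoiser block cross-attending to frozen text-encoder embeddings. The class and shapes are hypothetical; this is not Firefly’s published architecture.

```python
import torch
import torch.nn as nn

class TextConditionedBlock(nn.Module):
    """One denoiser block that injects text guidance via cross-attention."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tokens, text_tokens):
        # Queries come from the noisy image latents; keys/values come from
        # the text encoder's output (e.g., a frozen CLIP-style encoder).
        attended, _ = self.cross_attn(
            query=self.norm(image_tokens), key=text_tokens, value=text_tokens
        )
        return image_tokens + attended  # residual connection

# image_tokens: (batch, num_latent_patches, dim) from the denoiser
# text_tokens:  (batch, num_caption_tokens, dim) from the text encoder
```

The text encoder only ever supplies embeddings for the denoiser to attend over; no LLM generates anything.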
Hackworth@piefed.cato
Technology@lemmy.world•New Ways to Corrupt LLMs: The wacky things statistical-correlation machines like LLMs do – and how they might get us killed English
92 · 7 days ago
Here’s a metaphor/framework I’ve found useful but am trying to refine, so feedback welcome.
Visualize the deforming rubber sheet model commonly used to depict masses distorting spacetime. Your goal is to roll a ball onto the sheet from one side such that it rolls into a stable or slowly decaying orbit of a specific mass. You begin aiming for a mass on the outer perimeter of the sheet. But with each roll, you must aim for a mass further toward the center. The longer you roll, the more masses sit between you and your goal, to be rolled past or slingshotted around. As soon as you fail to hit a goal, you lose; until then, you can keep playing indefinitely.
The model’s latent space is the sheet. The way the prompt is worded is your aiming/rolling of the ball. The response is the path the ball takes. And the good (useful, correct, original, whatever your goal was) response/inference is the path that becomes an orbit of the mass you’re aiming for. As the context window grows, the path becomes more constrained, and there are more pitfalls the model can fall into. When you finally lose, it’s a phase transition: the model starts going way off the rails. This phase transition was formalized mathematically in this paper from August.
The masses are attractors that have been studied at different levels of abstraction. And the metaphor/framework seems to work at different levels as well, as if the deformed rubber sheet is a fractal with self-similarity across scale.
One level up: the sheet becomes the trained alignment, the masses become potential roles the LLM can play, and the rolled ball is the RLHF or fine-tuning. So we see the same kind of phase transition in prompting (from useful to hallucinatory), in pre-training (poisoned training data), and in post-training (switching roles/alignments).
Two levels down: the sheet becomes the neuron architecture, the masses become potential next words, and the rolled ball is the transformer process.
In reality, the rubber sheet has like 40,000 dimensions, and I’m sure a ton is lost in the reduction.
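If it helps, here’s a toy two-dimensional version of the sheet: Gaussian wells as the masses, a ball rolled in from the edge, and the basin it ends up in as a function of aim. All the constants (well positions, friction, step size) are made up for illustration.

```python
# Toy basin-of-attraction demo: roll a ball across a surface of Gaussian
# wells and report which "mass" it ends up nearest. Illustrative only.
import numpy as np

WELLS = np.array([[0.0, 0.0], [2.0, 1.0], [-1.5, 2.0]])  # attractor centers

def force(pos):
    """Net pull toward each well, like masses deforming the sheet."""
    f = np.zeros(2)
    for center in WELLS:
        d = center - pos
        f += d * np.exp(-np.dot(d, d))  # strong near a well, weak far away
    return f

def roll(angle, speed=2.0, friction=0.3, steps=3000, dt=0.01):
    """Launch from the edge at a given angle; return the nearest well."""
    pos = np.array([-4.0, 0.0])
    vel = speed * np.array([np.cos(angle), np.sin(angle)])
    for _ in range(steps):
        vel += (force(pos) - friction * vel) * dt  # gravity-like pull + drag
        pos += vel * dt
    return WELLS[np.linalg.norm(WELLS - pos, axis=1).argmin()]

# Small changes in aim ("prompt wording") can flip which basin captures
# the ball; with zero friction, the ball never settles into any orbit.
for angle in (0.20, 0.25, 0.30):
    print(f"aim {angle:.2f} rad -> nearest well {roll(angle)}")
```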

I do get what you’re saying. And I should say I have enormous respect for people who choose to have children and follow through with raising them. But it’s a type of selfishness to want to extend one’s particular family line rather than adopting one of the many children who currently need a parental figure in their lives. If there were few of us and this were a matter of actual continuation of the species, the common societal framing would make more sense. But as it is, the choice carries no real ethical/moral imperative beyond the responsibility to potential offspring.
Doing something or not doing it because it’s the right thing for you is not inherently selfish; it’s exactly the freedom we all cherish. And I think the typical selfish/selfless framing does more harm than good in this case. We may just disagree over semantics. But at least here, “selfish” carries negative connotations, since the disregard in the definition presupposes some entitlement the group holds over an individual’s decision. I’m mostly pushing back against the foundations of that entitlement. Thanks for engaging with this thoughtfully. I appreciate the discussion!