Which oligarch? I mean, yes, there’s definitely a degree of trusting “the right sort” there, but capitalism isn’t a team sport and they’re not a team. Honestly, one of them might launch Skynet anyway, if that’s how the technology grows, but a few people are theoretically able to agree not to do something, while legions never can.
So do you think it should all be open sourced, then? And if so, are you a skeptic of “AI alignment”, or even “AI safety”?
Any of them. They don’t necessarily like each other or team up, but they are smart enough to understand that an upstart toppling one is a potential threat to all of them. All things being equal, keep the game board the way it is, without any unwelcome surprises coming in to kick things over.
I do think it should be open sourced, just so that those of us who aren’t oligarchs have a chance to at least tread water a little longer. Those of us who aren’t wealthy need all the help we can get during a time where our inherent disposability has been writ large as a warning.
Am I a skeptic of AI alignment? No. What I’ve observed is that AI systems tend to reflect their creators’ goals and ethics quite well. Problem is, their goals and ethics are pretty much the same as the human race’s for the last few centuries. Built-in racism? No shit, it would have been strange if the construct hadn’t acted that way.
Am I a skeptic of AI safety? Yes, I think the idea is complete bullshit. AI reflects the goals, prejudices, and ethics of its creators quite well, which, if you look at human history, is anything but safe and sound. To put it another way, if you’ve got the money and the chops to build an AI system, you’re going to build it to make sure you don’t lose what you already have, and to see if you can get hold of more (at first to recoup the cost, then simply to accumulate more wealth). If you’re the military, you’re going to want to make sure you’re at least on equal footing with your enemies, both explicit and implicit (probably half of ‘warfighting superiority’ is propaganda; if you look at the breakdowns it’s closer to equal footing, with the usual margin of error).
I should say that I worry a lot about some powerful person getting an obedient AI. I’d say it’s been an animating force in my life, even, although the exact way the situation has gone in this decade makes me a bit less worried. A paperclip optimiser seems like the most likely outcome right now if AI takes off, which is somewhat better.
> They don’t necessarily like each other or team up, but they are smart enough to understand that an upstart toppling one is a potential threat to all of them.
Most of them were upstarts, though. They all come from privileged backgrounds of some kind, but didn’t start as billionaires. Rather, they invested in the right thing at the right time and were carried to the top. If we were talking about a more feudal-esque system like they have in Russia, you’d be right, and that’s why Russia sucks economically and militarily, but (for now) competition meaningfully exists in the West.
> I do think it should be open sourced, just so that those of us who aren’t oligarchs have a chance to at least tread water a little longer. Those of us who aren’t wealthy need all the help we can get during a time where our inherent disposability has been writ large as a warning.
How would that work? I have trouble seeing a way the average worker would benefit from having the ability to run an LLM offline a few years ahead of schedule. Hackers like me, and probably you, would benefit a bit, but then again I’m not going to personally compete with OpenAI for reasons that have nothing to do with the software.
> What I’ve observed is that AI systems tend to reflect their creators’ goals and ethics quite well.
That surprises me. Most of “data science”, as far as I can tell, is struggling to get a neural net to learn what you want it to, either by trial-and-error or by inventing new training schemes. Even getting ChatGPT to only answer the questions it’s supposed to has proven elusive.
They turn out racist because we’re racist, and so the training data is racist. Creators rarely want that, because it can bring legal trouble and certainly bad press. Sometimes they fail in ways that have nothing to do with us, like the previously mentioned trick of getting ChatGPT to pretend to be an evil version of itself, which it will do because that’s a likely sequence of tokens and doesn’t look enough like something bad to a less capable system.
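To illustrate what I mean by the bias coming from the data rather than from anyone’s intent, here’s a toy sketch of my own (entirely made-up “approval” data and hypothetical group names, not anything from a real system): even the dumbest possible model, one that just memorises per-group label frequencies, reproduces whatever skew its training set contains.

```python
from collections import defaultdict

# Made-up, deliberately biased training data: for otherwise identical
# applicants, group "a" is approved 80% of the time, group "b" only 20%.
training_data = ([("a", 1)] * 80 + [("a", 0)] * 20 +
                 [("b", 1)] * 20 + [("b", 0)] * 80)

# "Training": just count approvals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, label in training_data:
    counts[group][0] += label
    counts[group][1] += 1

def predicted_approval_rate(group):
    approvals, total = counts[group]
    return approvals / total

print(predicted_approval_rate("a"))  # 0.8 -- the bias in the data comes straight back out
print(predicted_approval_rate("b"))  # 0.2
```

A real neural net is a vastly fancier frequency-matcher than this, but the failure mode is the same: nobody has to want the skew for it to show up in the output.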
I’d actually agree that AI safety is on shaky foundations sometimes, but more because we don’t know what we do want our machines to do, and more than anything I’d like the two camps to stop undercutting each other.