For really useless call centers this makes sense.
I have no doubt that a ML chatbot is perfectly capable of being as useless as an untrained human first level supporter with a language barrier.
And the dude in the article basically admits that’s what his call center was like:
Suumit Shah never liked his company’s customer service team. His agents gave generic responses to clients’ issues. Faced with difficult problems, they often sounded stumped, he said.
So evidently good support outcomes were never the goal.
Agreed. Should we also mourn for the horse and buggy drivers? The gas station attendants? And the whole slew of jobs that have become obsolete over the centuries?
I do think we need something like UBI, and I’m not ignoring the lost jobs, but shit jobs shouldn’t have to exist. I’ll mourn for the workers but not for the job. Continuing to employ people to do thankless/hard/dangerous/etc. jobs is just silly.
deleted by creator
Doubt. These large language models can’t produce anything outside their dataset. Everything they do is derivative, pretty much by definition. Maybe they can mix and match things they were trained on but at the end of the day they are stupid text predictors, like an advanced version of the autocomplete on your phone. If the information they need to solve your problem isn’t in their dataset they can’t help, just like all those cheap Indian call centers operating off a script. It’s just a bigger script. They’ll still need people to help with outlier problems. All this does is add another layer of annoying unhelpful bullshit between a person with a problem and the person who can actually help them. Which just makes people more pissed and abusive. At best it’s an upgrade for their shit automated call systems.
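The “advanced autocomplete” comparison can be made concrete with a toy next-word predictor. This is a deliberately dumbed-down sketch (the corpus and function names are all made up for illustration), but it shows the core limitation the comment describes: the model can only ever suggest things it has seen before.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": a bigram model that can only ever suggest
# words it has already seen following the current word.
corpus = "please restart the router please restart the modem".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Returns the most common next word, or None if the word
    # never appeared in training -- the "outside the script" case.
    options = following.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict("restart"))   # "the" -- seen in training
print(predict("firmware"))  # None -- outside its dataset, no help
```

Real LLMs are vastly more sophisticated than a bigram table, but the failure mode when asked about something absent from training is the same in kind: a bigger script, not a new one.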
Most call centers have multiple levels of teams, where the lower tiers just read off a script and make up the majority. You don’t have to replace every single one to implement AI. It’s going to be the same for a lot of other jobs as well, and many people will lose jobs.
deleted by creator
Who also don’t have the information or data that I need.
It isn’t going to completely replace whole business departments, only 90% of them, right now.
In five years it’s going to be 100%.
I’d say at best it’s an upgrade to scripted customer service. A lot of the scripted ones are slower than AI, and often have agents with strong accents, making it harder for the customer to understand the script entry being read back to them, leading to more frustration.
If your problem falls outside the realm of the script, I just hope it recognises the script isn’t solving the issue and redirects you to a human. Oftentimes I’ve noticed ChatGPT not learning from the current conversation (if you ask it about this, it will deny doing so). In that scenario it just regurgitates the same 3 scripts back to me when I tell it it’s wrong. For me this isn’t so bad, as I can just turn to a search engine, but in a customer service scenario this would be extremely frustrating.
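One hypothetical guardrail for exactly that loop: if the bot keeps sending (near-)identical scripted replies, stop looping and hand the conversation to a human. Everything here (function name, threshold) is a made-up sketch, not any vendor’s actual escalation logic:

```python
# Hypothetical guardrail: if the bot gives the same scripted answer
# repeatedly, stop looping and escalate to a human agent.
def should_escalate(bot_replies, threshold=3):
    """True once the identical reply has been sent `threshold` times in a row."""
    if len(bot_replies) < threshold:
        return False
    return len(set(bot_replies[-threshold:])) == 1

history = []
for _ in range(3):
    history.append("Have you tried turning it off and on again?")

print(should_escalate(history))  # True: same script three times, escalate
```

A real system would compare embeddings rather than exact strings, but even this crude check beats regurgitating the same three scripts forever.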
Check out this recent paper that finds some evidence that LLMs aren’t just stochastic parrots. They actually develop internal models of things.
deleted by creator
Your description of AI limitations sounds a lot like the human limitations of the reps we deal with every day. Sure, if some outlier situation comes up then it has to go to a human, but let’s be honest – those calls are usually going to a manager anyway, so I’m not seeing your argument. An escalation is an escalation. The article itself even says it’s not a literal 100% replacement of humans.
You can doubt it all you want; the fact of the matter is that AI is provably more than capable of taking over the roles of humans in many areas of work, and it already does.
And the way customer support staff can be/is abused in the US is so dehumanizing. Nobody should have to go through that wrestling ring.
A lot of that abuse happens because customer service has been gutted to the point that it is infuriating to a vast number of customers calling about what should be basic matters. Not that it’s justified – it’s just that it wouldn’t have to be such a draining job if not for the greed that puts them in that situation.
There was a recent episode of Ai no Idenshi, an anime regarding such topics. The customer service episode was nuts and hits on these points so well.
It’s a great show for anyone interested in fleshing some of the more mundane topics of ai out. I’ve read and watched a lot of scifi and it hit some novel stuff for me.
I’m pretty sure it’d be a way nicer experience for the customers.
Lmfao, in what universe? As if trained humans reading off a script they’re not allowed to deviate from isn’t frustrating enough, imagine doing that with a bot that doesn’t even understand what frustration is…
deleted by creator
De facto instant replies.
Not with a good enough model, no. Not without some ridiculous expense, which is not what this is about.
If trained right, way more knowledgeable than their human counterparts.
Support is not only a question of knowledge. Sure, for some support services, they’re basically useless. But that’s not necessarily the human fault; lack of training and lack of means of action is also a part of it. And that’s not going away by replacing the “human” part of the equation.
At best, the first few iterations will be faster at brushing you off, and further down the line, once you hit something outside the expected range of issues, it’ll either spout nonsense or send you in circles until you’re put through to someone actually able to do something.
Both “properly training people” and “properly training an AI model” costs money, and this is all about cutting costs, not improving user experience. You can bet we’ll see LLM better trained to politely turn people away way before they get able to handle random unexpected stuff.
While properly training a model does take a lot of money, it’s probably a lot less money than paying 1.6 million people for any number of years.
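A back-of-envelope comparison makes the incentive obvious. Every number below is an illustrative assumption (neither the article nor the thread gives real figures), but even with generous AI costs the gap is stark:

```python
# Back-of-envelope only -- all numbers are illustrative assumptions,
# not figures from the article.
workers = 1_600_000
annual_cost_per_worker = 30_000          # assumed fully-loaded cost, USD/year
annual_wage_bill = workers * annual_cost_per_worker

training_cost = 100_000_000              # assumed one-off model training
annual_inference_cost = 500_000_000      # assumed serving cost at scale

first_year_ai = training_cost + annual_inference_cost
print(annual_wage_bill)   # 48,000,000,000
print(first_year_ai)      # 600,000,000 -- orders of magnitude cheaper
```

Even if the AI cost assumptions are off by 10x, the arithmetic still favors automation, which is the point the comment is making.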
Yeah but are you ready for “my grandma used to tell me $10 off coupon codes as I fell asleep…”
Cheap as hell until you flood it with garbage, because there is a dollar amount assigned for every single interaction.
Also, I’m not confident that ChatGPT would be meaningfully better at handling the edge cases that always make people furious with phone menus these days.
I’ve worked in this field for 25 years and don’t think that ChatGPT by itself can handle most workloads, even if it’s trained on them.
There are usually transactions which must be done and often ad hoc tasks which end up being the most important things because when things break, you aren’t trained for them.
If you don’t have a feedback loop to solve those issues, your whole business may just break without you knowing.
I think you’re talking about actual support, that knows their tools and can do things.
This article sound more about the generic outsourced call center that will never, ever get something useful done in any case.
I ordered Chipotle for delivery and I got the wrong order. I don’t eat meat so it’s not like I could just say whelp, I’m eating this chicken today I guess.
The only way to report an issue is to chat with their bot. And it is hell. I finally got a voucher for a free entree but what about the delivery fee and the tip back? Impossible.
I felt like Sisyphus.
I waited for the transaction to post and disputed the charge on my card and it credited me back.
There’s so many if-and-or-else scenarios that no amount of scraping the world’s libraries is AI today able to sort out these scenarios.
Yes, these kinds of transactions really need to be hand-coded to be handled well. LLMs are very poorly suited to this kind of thing (though I doubt you were dealing with an LLM at Chipotle just yet).
Maybe you work at a decent place but in my experience you’re really overestimating the people who answer calls and give generic responses.
Cheaper than outsourcing to poor countries with middling English speaking capability.
Coming to call center lines near you: voiced chatbots to replace the ineffective, useless customer support lines that exist today with the same useless outcomes for consumers but endless juggling back and forth without any real resolutions. Let’s make customer service even shittier, again!
If you bought the product we don’t need to worry about losing money anymore bro
On one hand, they’re crap jobs. On the other hand, in most economies we have crap jobs not because they’re necessary for productivity, but to give us an excuse to pay people to live.
Maybe if enough jobs are lost to automation, we’ll start to rethink the structure of a society that only allows people to live if they’re useful to a rich person.
Essentially, we’re just still doing feudalism with extra steps, and it’s high time we cut that nonsense out.
I think once workers can be replaced, there will be some virus that wipes out most of humanity. No point keeping billions of people around if they aren’t needed.
Username checks out… suffice to say that a time of increasing social unrest is on the way, when it’s even easier for the haves to sideline the have nots than it already was.
I don’t know, I just think it’s obvious that the rich guys view ordinary people as useless eaters.
We have crappy jobs because jobs need doing and it was still cheaper to get humans to do it without a substantial loss in functionality. They don’t exist because of some form of social altruism, as evidenced by the fact that as soon as a semi-viable alternative is offered then the jobs are gone.
With the dynamic shifting to automation, prematurely I would add, then employers are seeing a much cheaper way to achieve 80% of what they currently offer.
When I think of crappy jobs I think of a number of different sets.
Busywork for the extra hands in the clerical pool. This is the stuff that defines the careers of a lot of people in developed countries, in which they’re hired and trained, may even work on projects for a while, and then are dropped into a holding cubicle and tasked with something benign but probably useless (say, entering archived paper files from decades ago into the new data system in case we need them someday – I did that). Here in the states (and according to anecdotes, the UK) we have a lot of this kind of work, and while it should only be a temporary measure between company projects, entire department clerical pools have been stuck in such holding patterns for years at a time.
It happens for two reasons I’ve seen. One, the economy tanks, as during the subprime mortgage crisis of 2008, and lower management, in a grand effort of humanitarian desperation, tells upper management that no, my crew is working hard and very necessary, in hopes that theirs is not a department that gets eliminated during the downsizing. (The managers in question are covering their own butts too, but the ones I talked with recognized that anyone they dismissed would be eating ramen in a month.) And two, the mismanagement of responsibility in linking tasks that need to be done with worker pools capable of doing them. Either the managers tasked with making such links are overwhelmed, or the process of connecting pools to duties is distributed so broadly that it’s de-prioritized by everyone. If those tasks are particularly odious (say they involve interacting with a toxic upper manager), then lower management will find reasons that their own pool is unable to help, and so the company has simultaneous worker shortages and surpluses. For large, multinational conglomerates, this sort of thing is routine.
Jobs that are facades to cover for social or moral obligations that are expected of the company, but (from the perspective of shareholders) are too expensive to actually do, such as the faux tech-support services the US exported to phone banks in India that are limited to some very short troubleshooting trees rather than someone actually familiar with the technical aspects of the product. This is (I think) what the business owner of the article is talking about replacing.
Now what he should be doing is hiring a tech service and including the troubleshooting tree in the manual, as is typically done with household appliances. The workers on that phone bank are set up with pressure from angry customers to offer productive solutions, while also getting pressure from management to placate those customers, for which they have insufficient facilities. I’m reminded of my own experience being told by upper management that I should spend only fifteen minutes explaining to customers how to install CD-ROM drives (on MS-DOS, mind you), when it usually took forty-five minutes to an hour to walk a non-geek through the process.
Such jobs shouldn’t exist, rather the company should actually hire real departments to deal with social responsibilities, rather than front veneers and marketing campaigns, but that’s a problem intrinsic to the system and not one that will be solved with LLMs given the same short troubleshooting trees. (An LLM with a big troubleshooting tree developed by a serious tech team might work, but would require ongoing development and maintenance, and the occasional tech-support call with a human being. Also a better LLM than we have.)
Jobs that are odious because they’re labor intensive, hazardous, tedious, frustrating or otherwise taxing on the worker, and yes there are a lot of necessary tasks that need to be done that fall into these categories. So when you say “we have crappy jobs because jobs need doing”, I assume you’re talking about these.
Because we’re in a capitalist system that mandates shareholder primacy, our companies first seek out a labor pool they can exploit since they don’t have any other choice. This is classified as bonded servitude, id est slavery but we don’t like to call it that when an enterprise uses human beings like interchangeable, disposable parts. Historically, we’ve hired children, exploited prison populations, immigrants, invoked a truck system, a culture of obligatory productivity, whatever, anything to force our fellow human beings to toil under cruel conditions.
Without an exploitable population, enterprises face labor unrest (unions are the least violent version of this we know) aimed at improving conditions and compensation, leaving industries to either capitulate – paying extra and providing proper gear – or automate wherever they can.
I imagine in collectives, everyone eventually gets pissed off from drawing straws and start working on ways to make odious tasks less odious, either through automation or improving the conditions of the task that it’s no longer odious, e.g. making actual cleaning as close to Power Wash Simulator as possible.
What if we get it to agree to give us stuff for free? Is it a representative of the company or not?
You also have to have a reasonable belief the company representative is authorized to do whatever they’re doing to be entitled to it.
karens do
I see two inevitable problems:
- we outsourced this to you because it was cheaper; if you’re using ChatGPT, what do we need you for?
- companies want people to buy stuff, but if you significantly reduce the workforce, you also reduce the availability of funds to buy stuff
1: I assume you mean the business does outsourced customer service, not an internal department.
2: universal basic income time, or let’s put people to work on creative, innovative applications, not mind-numbing shit.
We don’t need to keep all bullshit jobs around. The printing press putting hand written scribes out of jobs was a good thing. This is similar. New jobs will be created that will hopefully create more productive work.
Seems like a good way to get the “agent” to agree it’s in the wrong, and get 100% refund
I’m interested in whether the AI agent has the power to issue refunds, or at least return authorizations.
One of the things fascinating to me is that some of the problems humans are bad at handling (such as social engineering) AI tends to be even worse at.
I mean, if you go to your credit card provider with a copy of the log with their rep, and the rep says “I authorize a refund”, you can at least make the argument.
Any company scummy enough to trust an AI with this wouldn’t give it the authority, though.
Remember when AI was going to make life better for everyone?
Yeah. That shit’ll be the end of us.
AI will make life better for the shareholders.
Hopefully it’ll be the end of capitalism. How is the economic model supposed to function when nobody is working? Where are people supposed to get money from? How is anything going to be taxed?
Realistically though it’ll somehow push capitalism into hyperdrive and enslave the global population under the control of the AI owners.
It didn’t work that well when people were working anyway.
It won’t for as long as all the power is in the capital owner’s hands.
A lot of jobs are just busy work that does nothing and makes nothing. Talking about automating them misses the point of why the jobs exists in the first place.
“I see that you are throwing a ball at a target connected to a platform with a human sitting above a tank of water. Here is an AI-generated picture of a random human underwater to sate your needs. Yay! I have made this process 200% more efficient!”
It’s crazy how people seem fundamentally incapable of looking at the big picture and ask themselves things like, “what even is the purpose of society? Is this the best society humanity is able to come up with? What if I am not ready to accept society as it is presented to me, what are my alternatives, do I even have any? What are my obligations towards a society that marginalizes me and treats me like a second or third tier human, without any hope of ever improving my lot?”
Ask people if they would rather be free and get everything they want without having to work for it. The answers you’ll get will boggle your mind.
We’ve been permeated by the idea that “you have to be financially productive to be a decent human” for so long that even people against excessive/useless work still sometimes miss the point of this crazy race toward generating more profit regardless of anything else.
Sometimes, reaching the “it works” point is enough, but higher-ups never stop there. It always has to be “better/more”.
I’m surprised by the number of workaholics that exist, like why do you want to work so much? Go explore the world, learn things, make things, but people want to work instead?
You still need to employ some humans as a backup when the AI catastrophically fucks up, but for the most part it makes sense. Not all jobs need to continue to exist.
Exactly. As the article ends:
Not every customer service employee should worry about being replaced, but those who simply copy and paste responses are no longer safe, according to Shah.
“That job is gone,” he said. “100 per cent.”
Working conditions in this industry are not great. The turnover rate can reach 80% sometimes. It can be a difficult, stressful and low paid job that few people enjoy. At the same time, the demand for this work keeps increasing as more and more of consumer activity shifts online and remote. It seems to me that the technology may be a net benefit in this case. The public and its regulatory authority should, however, keep a close eye on developments to make sure humans are not left behind.
This is just the smallest tip of the iceberg.
I’ve been working with gpt-4 since the week it came out, and I guarantee you that even if it never became any more advanced, it could already put at least 30% of the white collar workforce out of business.
The only reason it hasn’t is because companies have barely started to comprehend what it can do.
Within 5 years the entire world will have been revolutionized by this technology. Jobs will evaporate faster than anyone is talking about.
If you’re very smart, and you begin to use gpt-4 to write the tools that will replace you, then you MIGHT have 10 good years left in this economy before humans are all but obsolete.
If you’re not staying up nights, scared shitless by what’s coming, it’s because you don’t really understand what gpt-4 can do.
You sound like one of those idiots preaching the apocalypse from a street corner. Humans obsolete in 10 years? Yeah sure buddy, right after all those profits trickle down. This is just another tool, an interesting one to be sure, but still just a tool. If you’re staying up nights worrying about this, you don’t really understand the technology, or maybe you’re just worried someone is going to realize you don’t do shit.
I work with AI stuff – just getting into LLMs, but I have been doing SD work since the public release last year. In just over a year, SD has gone from drawing a passable 512x512 image of a cat on a reasonably powerful graphics card to creating 4K images on the same cards that are nearly indistinguishable from actual photos and paintings. It is the single fastest adoption and development of a technology I have seen in my 30 years in tech.

I have actually been tracking the job market and the impacts this will have, and he is not all that far off in his estimate. The current push in AI development is a nearly ubiquitous existential threat to employment as we view it in the society of the United States. Everyone is on the chopping block, and you’d best believe that the C-level executives want to eliminate as many positions as possible. Labor is viewed as an atrocious expense and the first place cuts should be made.

I challenge you to come up with a list of 10 jobs that each employ more than 100,000 people in the country that you think are safe from AI, and I will see how many of them I can find someone already actively working on eliminating.
Companies don’t want employees, only paying customers. If they can eliminate employees, they will. Hence self-checkouts in grocers, pay at the pump for gas stations, order kiosks at McDonald’s, mobile ordering for virtually every fast food place, the list goes on and on. These are all recent non-AI replacements that have cut into the employment prospects for people.
I don’t disagree with most of what you said. I think so far the following jobs are safe from direct AI replacement, because it is much harder to replace manual laborers.
- Oil rig worker
- Plumber
- Construction worker
- Landscaper/gardener
- Telephone repair tech
- Mechanic
- Firefighter
- Surveyor
- Wildlife management officer
- Police
What companies won’t realize until too late is that paying customers need jobs to pay for things. If AI causes unemployment to rise to some ungodly high, paying customers will become rare and companies will collapse in droves.
Thanks for actually rising to the challenge – it was fascinating to research how AI is affecting the various industries, and how deeply. I was able to find direct evidence of replacement in 7/10 of them; one was work that is similar and could easily be adapted (telecom line repair), one was an analysis that I think makes a lot of good points (plumbing), and one was genuinely about augmenting workers already in place (wildlife conservation/officer).
- Oil rig worker https://onestopsystems.com/blogs/one-stop-systems-blog/ai-on-oil-rigs
- Plumber Answer to Can artificial intelligence replace plumbers? by George Warner https://www.quora.com/Can-artificial-intelligence-replace-plumbers/answer/George-Warner-1?ch=15&oid=179562613&share=853bcabc&srid=3hR9y&target_type=answer (not someone working on it, but a good analysis)
- Construction worker https://www.sciencedirect.com/science/article/pii/S0926580522001716
- Landscaper/gardener https://americangroundskeeping.com/ai-in-landscaping-how-artificial-intelligence-is-changing-the-game-for-landscape-and-hardscape-design/ https://ts2.space/en/ai-in-robotic-landscaping/
- Telephone repair tech https://netl.doe.gov/sites/default/files/netl-file/20VPRSC_Zhang.pdf (sorry for the PDF. It is not specifically phone lines, but the tech could be adapted relatively easily to climb a telephone pole instead of a boiler wall)
- Mechanic https://electronics360.globalspec.com/article/18552/robots-are-primed-to-replace-auto-mechanics-or-are-they
- Firefighter https://aiforgood.itu.int/robotics-and-ai-to-predict-and-fight-wildfires/
- Surveyor https://www.landform-surveys.co.uk/news/thoughts/ai-surveying/
- Wildlife management officer https://aiworldschool.com/research/this-is-why-ai-in-wildlife-conservation-is-so-glorious/ I will admit that this is a case where AI is augmenting more than replacing at this time.
- Police https://www.cnn.com/2023/06/18/asia/police-robots-singapore-security-intl-hnk/index.html This one is low-hanging fruit… I will leave it at one link.
What companies won’t realize until too late is that paying customers need jobs to pay for things. If AI causes unemployment to rise to some ungodly high, paying customers will become rare and companies will collapse in droves.
I wholeheartedly agree. Functionally, we are going to have to institute a UBI model. It is the only way that society will be able to distribute funds properly when population outpaces jobs due to the exponential growth of populations and the rapidly shrinking landscape of jobs. The corporations are going to need to pay us one way or another.
Damn… nice work on the research! I will read through these as I get time. I genuinely didn’t think there would be much for manual labor stuff. I’m particularly interested in the plumber analysis.
I think augmentation makes a lot of sense for jobs where a human body is needed and it will be interesting to see how/if trade skill requirements change.
I’ll edit this as I read…
Plumbing. The article makes the point that it isn’t all or nothing. That as automation increases productivity, fewer workers are needed. Ok, sure, good point.
Robot plumber? A humanoid robot? Not very likely until enormous breakthroughs are made in machine vision (I can go into more detail…), battery power density, sensor density, etc. The places and situations vary far too greatly.
Rather than an Asimov-style robot, a more feasible yet productivity enhancing solution is automated pipe cutting and other tasks. For example, you go take your phone and measure the pipe as described in the link. Now press a button, walk out to your truck by which time the pipe cutter has already cut off the size you need saving you several minutes. That savings probably means you can do more jobs per day. Cool.
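That “measure on the phone, cut in the truck” workflow needs almost no intelligence on the cutter’s side. A minimal sketch (function name and the 25 mm engagement figure are assumptions for illustration, not from any real product):

```python
# Hypothetical automated pipe cutter: it only needs the measured run
# length plus a socket-engagement allowance at each fitting end.
def cut_length_mm(measured_run_mm, fittings, engagement_mm=25):
    """Length of pipe to cut: the measured face-to-face run plus the
    depth the pipe seats into each fitting socket."""
    return measured_run_mm + fittings * engagement_mm

print(cut_length_mm(1200, 2))  # 1250 mm for a run with a fitting at each end
```

The point is that this kind of productivity tool is a lookup-and-arithmetic problem, nothing like the open-ended perception a humanoid robot plumber would need.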
Edit 2
Oil rig worker. Interesting and expected use of AI to improve various aspects of the drilling process. What I had in mind was more like the people that actually do the manual labor.
Autonomous drones, for example, can be used to perform inspections without exposing workers to dangerous situations. In doing so, they can be equipped with sensors that send images and data to operators in real time to enable quick decisions and effective actions for maintenance and repair.
Now that’s pretty cool and will probably reduce demand for those performing inspections (some of whom will have to be at the other end receiving and analyzing data from the robot, until such time as AI can do that too).
Autonomous robots, on the other hand, can perform maintenance tasks while making targeted repairs to machinery and equipment.
Again, the technologies required to make this happen aren’t there yet. Machine vision (MV) alone is way too far from being general purpose. You can devise an MV system that can, say, detect a Coke can and maybe a few other objects under controlled conditions.
But that’s the gotcha. Change the intensity of the lighting, change the color temperature or hue, and the MV probably won’t work. It might also mistake a Diet Coke can, or a similar-sized cylinder, for a Pepsi can. If you want it to recognize any aluminum beverage can, that might be tough. Meanwhile, any child can easily identify a can under any number of conditions.
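A toy sketch of why that brittleness happens: a naive colour-threshold “detector” (the thresholds here are entirely made up) only works under the lighting it was tuned for, which is roughly the failure mode being described.

```python
# Naive "detector": classify a can by its dominant RGB colour.
# Tuned for one lighting condition; breaks under any other.
def classify_can(rgb):
    r, g, b = rgb
    if r > 150 and g < 80 and b < 80:
        return "coke"          # red-dominant
    if b > 150 and r < 100:
        return "pepsi"         # blue-dominant
    return "unknown"

print(classify_can((200, 40, 50)))   # "coke" under studio lighting
# Same can at sunset: everything shifts warm/orange and the rule breaks.
print(classify_can((230, 140, 60)))  # "unknown"
```

Modern learned detectors are much more robust than hard-coded thresholds, but the general point stands: they are only as invariant as the variation covered in training.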
Now imagine a diesel engine generator, let’s say. Just getting a robot to change the oil would be nice. But it has to either be limited to a specific model of engine or be able to recognize where the oil drain plug and fill spot is for various engines it might encounter.
What if the engine is a different color? Or dirty instead of clean? Or it’s night, or noon (harsh shadows), overcast (soft shadows), or sunset (everything is yellow orange tinted)? I suppose it could be trained for a specific rig and a specific time of day but that means set up time costs a lot. It might be smarter to build some automated devices on the engine like a valve on the oil pan. And a device to pump new oil in from a vat or standard container or whatever. That would be much easier. Maybe they already do this, idk.
Anyway… progress is being made in MV and we will make far more. That still leaves the question of an autonomous robot of some kind able to remove and reinstall a drain plug. It’s easy for us but you’d be surprised at how hard that would be for a robot.
Had a thought that deserved a separate post. Your selection of MV tasks was rather perverse for the tasks we were discussing. Identifying a pop can is something humans do easily because pop cans were made for us to easily identify.

Level the playing field and let’s start looking for internal stress fractures in the superstructure of a 100’ tall concrete bridge. That is something AI drones are already being designed and deployed for. The drone can easily approach the bridge with a suite of sensors that let it see deep into the superstructure and detect future failure points. Humans would struggle to do that. I have also seen things about maintenance drones that are able to crawl on the bridge using a variety of methods (usually designed for specific bridges), fill cracks with sealant, ablate rust using lasers, then paint the freshly cleaned metal.

The benefit of replacing a workforce with AI-driven robotics is that you can purpose-build and purpose-train the tool to do exactly what you need it to do. A robot that scurries into a crawl space to run a pipe for a plumber doesn’t need to know how to do anything but recognize where the pipe goes, what not to touch, and how much force to use when installing it. It doesn’t need to identify a pop can; it doesn’t need to draw a Rembrandt. All it needs to do is pull a pipe and weld it in place (and yes, I know I am oversimplifying a bit).
The other thing that kinda gets me is the whole “cramped spaces” safety net that I kept seeing for why this job or that was going to be safe. Designing a small, agile robot is not really a challenge. Add onto it that in many situations you could use a tethered drone to do the actual work that is much smaller and the AI brain can be sitting safely outside the situation. You could even plug it into power, so battery tech doesn’t need to increase. shrug I guess I just see quite a bit of very fast advances in the tech that have a worrying trajectory to me.
All great points. I guess I need to think of this topic more from the “what is possible” mindset rather than the “this is too hard” mindset to get a fair assessment of what is coming. All while still framing it in the sense of improving worker efficiency and automating human tasks piecemeal over time.
Your points on MV are not unfounded, but they are also extremely anthropocentric. All of your examples rely on the visible light spectrum and standard “vision” as we know it. Realistically, any sensor can be used to generate an image if you know what you are doing with it; radio telescopes are a great example of this. There is also a lot of research going into giving AI machine vision access to other sections of the EM spectrum ( https://www.edge-ai-vision.com/2017/10/beyond-visible-light-applications-in-computer-vision/ and https://www.technologyreview.com/2019/10/09/132696/machine-vision-has-learned-to-use-radio-waves-to-see-through-walls-and-in-darkness/ ) as well as echolocation ( https://www.imveurope.com/news/echolocation-neural-net-gives-phones-3d-vision-sound ). There are many other types of “vision” that could definitely distinguish a pop can.
Agree that other parts of the EM spectrum could enhance the ability of MV to recognize things. Appreciate the insights – maybe I will be able to use this when I get back to tinkering with MV as a hobbyist.
Of course, identifying one object is one level. For a general-purpose replacement for human ability, since that's what the thread is (ahem) focused on, it has to identify tens of thousands of objects.
I need to rethink my opinion a bit. Not only how far general object recognition is but also how one can “cheat” to enable robotic automation.
Tasks that are more limited in scope and variability would be a lot less demanding. For a silly example, let's say we want to automate replacing fuses in cars. We limit it to cars with fuse boxes in the engine bay, and we mark the fuse box with a visual tag the robot can detect. The fuse layout per vehicle model is stored, and the code on the fuse box identifies the model. The robot then uses actuators to remove the cover, orients itself to the box using more markers, and the rest is basically pick-and-place technology. That's a smaller and easier problem to solve than "fix anything possibly wrong with a car". A similar deal could be done for oil changes.
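To make the fuse-swap idea concrete, here's a minimal sketch of the lookup step in Python. Everything here is hypothetical: the model IDs, slot names, and coordinates are made-up data, and in a real system the model ID would come from detecting the tag with a camera (e.g. an ArUco-style fiducial) rather than being passed in as an argument.

```python
# Hypothetical sketch of the fuse-swap flow described above: a fiducial
# tag on the fuse box identifies the vehicle model, and a stored layout
# maps each fuse slot to coordinates for a pick-and-place arm.

# Stored layouts keyed by the model ID encoded in the tag (made-up data).
FUSE_LAYOUTS = {
    "model_A": {"F1": (12.0, 4.5), "F2": (12.0, 9.0), "F7": (24.0, 4.5)},
    "model_B": {"F1": (8.0, 3.0), "F3": (8.0, 6.0)},
}

def plan_fuse_swap(model_id, fuse_slot):
    """Return the (x, y) offset of a fuse relative to the box's marker."""
    layout = FUSE_LAYOUTS.get(model_id)
    if layout is None:
        raise ValueError(f"unknown model: {model_id}")
    if fuse_slot not in layout:
        raise ValueError(f"model {model_id} has no slot {fuse_slot}")
    return layout[fuse_slot]

# The rest is ordinary pick-and-place: move to the offset, pull, insert.
target = plan_fuse_swap("model_A", "F2")
print(target)  # (12.0, 9.0)
```

The point of the sketch is that once the markers constrain the problem, the "intelligence" needed collapses into a table lookup plus standard motion control.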
For general-purpose MV object detection, I'd have to go check, but my guess is that state-of-the-art MV can identify a dozen or maybe even hundreds of objects, so I suppose one could do quite a bit with that to automate some jobs. MV is not, to my knowledge, at the level of a general-purpose replacement for humans. Yet. Maybe it won't take that much longer.
In ~15 years in the hobbyist space we’ve gone from recognizing anything of a specified color under some lighting conditions to identifying several specific objects. And without a ton of processing power either. It’s pretty damn impressive progress, really. We have security cameras that can identify animals, people, and delivery boxes. I am probably selling short what MV will be able to do in 15 more years.
Pretty sure nah. But time will tell; I'll believe it when I see it. AI has been coming for jobs since before Terminator. It will replace thousands of jobs, just like:
Washerwomen, lamplighters, calculators, and all the work that farm labourers used to do. Automation comes for us all.
Some jobs shouldn't exist anyway. God, the number of office workers moving numbers from one tab to another and getting paid a bucketload.
However, nursing, elderly care, psychology, counselling, mindfulness teaching, and other jobs that are actually useful for society are probably safe. Yes, AI can do some of all these things, but it can't do them all with empathy. Empathy is key to most of these human-focused roles. We need more people in these roles and fewer working just to make more money.
But a lot of jobs did get automated away. And serious consequences did occur from that. Sometimes places rebound from it, but sometimes they did not. And at some point… there will be more people than jobs for them to do, as we continue automating.
In the end, the base foundation for capitalism will be broken, and we will be in an economic crisis of unprecedented scales.
Capitalism doesn’t work. Pretty sure everyone knows that.
We don’t want to work. We can automate away every job. Then we can be free to actually pursue what we want. Humanity isn’t based on how many shiny trinkets we have.
Yes, but the problem is, we are stuck with the system until we force a societal level change. Capitalism works plenty well enough for the powerful, and they aren’t willing to let go that easily.
I don’t disagree. Bring on the revolution
“It won’t take people’s jobs! And also people’s jobs are stupid and they deserve to have them taken away!”
What jobs are “useful for society” has no impact on what jobs are actually available to society, only what is deemed “profitable” has any place in this capitalist dystopia. Nice idealism though, I hope it won’t sting too bad having it shattered growing up.
I'm grown up. It will remove jobs; I just said that. Jobs that could be automated regardless. Obviously AI will remove jobs, just like computers did. But not the ones we actually need. Pretty easy to understand, or do you need to grow up to understand that?
If you’re staying up nights worrying about this, you don’t really understand the technology
And you think managers, the people deciding who gets replaced by AI, understand the technology?
This is part of the problem. They don’t, and won’t, fully understand the technology or its limitations or long-term impacts. They will understand that the salesman pushing the AI product told them it could eliminate 5-10% of their workforce. Whether or not the product can actually do that effectively won’t matter, they’ll still buy it, implement it, and fire a bunch of people.
I think once SAP and Jira implement a lot more AI and make it simpler to use, it could cut a lot of corporate jobs. Not the hands-on stuff, but a lot of the simpler jobs: purchasing and inventory staff could be shrunk down to fewer people and fewer cubicles. At least that's what we talked about at our company, how everyone is adjusting to the new world, especially advertising, now that everything will be served to you by a bot instead of a search.
As I said, if you’re not scared shitless, you really don’t understand what gpt-4 can do.
It’s not “just another tool.”
It’s an intelligence.
This technology is more world-changing than computers, the Internet, or mobile technology. And it’s evolving faster than any of those things did.
You’ll see. Unfortunately.
How do you define “intelligence” in this context?
Do you think gpt4 is self aware?
Do you believe this LLM tech has the ability to make judgement calls, say? Or understand meaning?
What has been your experience with the accuracy / correctness of the answers it has provided? Does it match claims that mistakes or “hallucinations” occur often?
You’re wandering into one of the great questions of our age: what is intelligence? I don’t have a great answer. All I know is that gpt-4 can REASON, and does so better than the average human.
Is gpt-4 self-aware? Yes. To an extent. It knows what it is, and can use that information in its reasoning. It knows it's an LLM, but not which model.
Can it make judgement calls? Yes. Better than the average human.
Understand meaning? Absolutely. To a jaw-dropping extent.
Accuracy and correctness… Depends on the type of question.
What you need to understand is that gpt-4 isn't a whole brain. Think of it as if we have managed to reproduce the language center of the brain. I believe this is the mechanism for higher reasoning in the human brain.
But just as in humans with right-brain injuries, the language center is disconnected from reality at times.
So, when you think about gpt-4 as the most important, difficult to solve part of the brain, you start to understand that with some minimal supporting infrastructure, you now have something very similar to a complete brain.
You can use vector databases to give it long-term memory, and any kind of data retrieval used to augment its prompts improves accuracy and reduces hallucinations almost entirely.
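The retrieval-augmentation idea above can be sketched in a few lines of Python. This is a toy stand-in, not what production systems do: real setups use learned embeddings and an actual vector database, whereas here bag-of-words vectors and cosine similarity illustrate the mechanism of fetching the nearest stored "memory" and prepending it to the prompt. The memory strings and question are invented examples.

```python
# Toy sketch of "vector database as long-term memory": embed stored
# notes and a query, retrieve the nearest note, prepend it to the prompt.
import math
from collections import Counter

def embed(text):
    # Bag-of-words stand-in for a real learned embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

memory = [
    "the customer's order shipped on Tuesday",
    "our refund policy allows returns within 30 days",
]

def retrieve(query):
    # Nearest-neighbour lookup; a vector DB does this at scale.
    return max(memory, key=lambda note: cosine(embed(query), embed(note)))

def augmented_prompt(question):
    # The retrieved note grounds the model's answer, which is what
    # reduces hallucination: it answers from context, not from recall.
    return f"Context: {retrieve(question)}\nQuestion: {question}"

print(augmented_prompt("when did the order ship?"))
```

The design point is that the language model itself stays stateless; all the "memory" lives outside it and is injected per request.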
With my very mediocre programming skills, I managed to build a system that is curious, has a long-term memory, and can do a wide variety of tasks, enough to easily replace an entire customer service, tech support, sales, and marketing team.
That’s just ME, and working with the gpt-4 that’s available to the public with a bunch of guardrails on it. Today.
Imagine a less-restricted system, with infrastructure built by an experienced enterprise coding team, and with just one more generation of LLM improvement? That could wipe out half the white collar workforce.
If LLM improvement were only geometric, and not exponential (as it clearly is), in 10 years these things will be smarter AND MORE CREATIVE than all humans.
The truth is that we’re going to be there in 5 years.
Appreciate the detailed response!
Indeed, intelligence is …a difficult thing to define. It’s also a fascinating area to ponder. The reason I asked was to get an idea of where your head is at with the claims you made.
Now, I admit I haven’t done a lot with gpt-4 but your comments make me think it is worth the time to do so.
So you indicate gpt-4 can reason. My understanding is gpt-4 is an LLM, basically a large scale Markov chain, trained to respond with appropriate output based on input (questions).
On the one hand, my initial reaction is: no, it doesn’t reason it just mimics or simulates human reasoning that came before it in text form.
On the other hand, if a program could perfectly simulate whatever processes are involved in reasoning by a human to the point that they’re indistinguishable, is it not, in effect, reasoning? (I suppose this amounts to a sort of Turing Test but for reasoning exercises).
I don't know how gpt-4 LLMs work yet. I imagine, being a Markov model (specifically a Markov chain), that if the model is trained on human language, then the underlying semantics are implicitly captured in the statistical model. Simplistically: if many sentences reflect human knowledge that cars are vehicles and not animals, then it's statistically unlikely for anyone to write about the attributes and actions of animals when talking about cars. I assume the LLM is of such a scale that it permits this apparently emergent behavior.
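The "advanced autocomplete" picture being discussed here can be made concrete with a word-level Markov chain, next-word prediction from raw bigram counts. To be clear, this is the simplistic model the comment invokes, not how gpt-4 actually works: transformers condition on long contexts with learned weights, not count tables. The tiny corpus is invented for illustration.

```python
# A minimal word-level Markov chain (bigram model): the next word is
# predicted purely from how often it followed the current word in the
# training text.
from collections import defaultdict, Counter

corpus = ("the car is a vehicle . the dog is an animal . "
          "the car has wheels").split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # 'car' (seen after 'the' twice, vs 'dog' once)
```

Even this toy shows the point made above: regularities in the text ("car" follows "the" more often than "dog" does) get captured as statistics, with no explicit representation of what a car is.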
I am skeptical about judgement calls. I would think some sensory input would be required. I guess we have to outline various types of judgement calls to really dig into this.
I am willing to accept that gpt-4 simulates the portions of the brain that deal with semantics and syntax, both the receiving and producing sides. And, maybe to some degree, knowledge and understanding.
I think “very similar to a complete brain” is an overstatement as the brain also does some amazing things with vision, hearing, proprioception, touch, among other things. Human brains can analyze situations and take initiative, analyze things and understand how they work and apply that to their repair, improvement, duplication, etc. We can understand and solve problems, and so on. In other words I don’t think you’re giving the brain anywhere near enough credit. We aren’t just Q&A machines.
We also have to be careful of the human tendency to anthropomorphize.
I’m curious to look into vector databases and their applications here. Addition of what amounts to memory, or like extended context, sounds extremely interesting.
Interesting to ponder what the world would be like with AGI taking over the jobs of most knowledge workers, artists, and so on. (I wonder if someone could create a CEO replacement…)
What does it mean for a capitalist society with masses of people permanently unemployed? How does the economy work when nobody can afford to buy anything because they’re unemployed? Does this create widespread poverty and collapse or a post-scarcity economy in some sectors?
Until robots mechanically evolve to Asimov’s vision, at least, manual labor is safe. Truly being able to replace a human body with a robot is still a ways off due to lack of progress on several fronts.
You sound like one of those peasants standing on street corners saying, “horses replaced with fuming metal boxes in 10 years? Hah, yeah, sure buddy, right after we put a man on the moon! Getoutta here, you loon!”
There is a video from CGP Grey titled Humans Need Not Apply that is extremely relevant. It was posted 9 years ago. It’s a great video, I highly recommend everyone check it out.
Yup. This is why it is vital that we all get behind Universal Basic Income.
The jobs will leave and they won’t come back. UBI is inevitable, but if we don’t get there soon enough there will be years of suffering and poverty for hundreds of millions.
Thanks for sharing. If you see that list of type of jobs at the end, it’s easy to see which jobs could get replaced within a reasonably short amount of time. Greed will always find a way to profit from whatever development arises. If they have 1 mountain of gold, they want 2 mountains of gold.
I’m a senior Linux sysadmin who’s been following the evolution of AI over this past year just like you, and just like you I’ve been spending my days and nights tinkering with it non stop, and I have come to more or less the same conclusion as you have.
The downvotes are from people who haven’t used the AI, and who are still in the Internet 1.0 mindset. How people still don’t get just how revolutionary this technology is, is beyond me. But yeah, in a few years that’ll be evident enough, time will show.
I feel sorry for these folks. They have no idea what’s about to happen.
@flossdaily@lemmy.world
@anarchy79@lemmy.world
@SirGolan@lemmy.sdf.org
I quite agree. And, from SirGolan's reference, submitted on 3 Oct 2023: "Language Models Represent Space and Time".
From the summary: "Our analysis demonstrates that modern LLMs acquire structured knowledge about fundamental dimensions such as space and time, supporting the view that they learn not merely superficial statistics, but literal world models."
https://arxiv.org/abs/2310.02207
What makes it worse (in my opinion) is that LLMs are just one step in this development (which is exponential and not limited by human capabilities).
For example :
Numenta launches brain-based NuPIC to make AI processing up to 100 times more efficient
https://lemmy.world/post/4941919
removed by mod
Since I forgot what I was saying here 4 months ago, I read the whole thread again. Basically, I agreed with what you said then (4 months ago), and I added a couple of references/ideas to make that point stronger.
Also, I have no idea why you received this notification only today, 4 months after the discussion. I guess the Lemmy software is buggy: for my account, I didn't receive notifications in a few instances where someone replied to my comments, and I only saw those replies because I was reading everything again.
take care, 👍
removed by mod