The A.I. Thread of OMG We're Being Replaced

For a few years, until Amazon manages to automate away the need for a workforce.

But it’s fine, people will all still have jobs doing… something. Everything is fine and there’s absolutely no reason to be concerned about the state of our economic system. Yay late stage capitalism!

We can give ChatGPT a few talking points and have it write all the boring parts of reports, then we can create another AI to read the reports and filter the boring stuff back down into a few points.
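Tongue-in-cheek as it is, the round trip is easy to sketch. Here’s a minimal version, where `llm` is a hypothetical stand-in for any chat-model API call (the function names and prompts are mine, purely illustrative):

```python
def expand_report(points, llm):
    """Inflate a few talking points into long report prose via the model."""
    prompt = "Write a long, formal report covering these points:\n"
    prompt += "\n".join(f"- {p}" for p in points)
    return llm(prompt)

def summarize_report(report, llm):
    """Have a second model boil the generated report back down to bullets."""
    return llm("Summarize this report as a few bullet points:\n" + report)

# A fake model so the sketch runs without any API key: it just echoes input.
fake_llm = lambda prompt: prompt

points = ["Q3 revenue up 4%", "hiring freeze continues"]
report = expand_report(points, fake_llm)
summary = summarize_report(report, fake_llm)
```

The joke writes itself: the only information that survives the round trip is the original bullet points, which is exactly what the two models spent all that compute padding out and compressing back.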

That’s exactly my point. They’re not uniquely qualified; in fact, their careers have given them skills that won’t transfer to anything else.

Surely there are already AIs being trained to “reverse-ChatGPT” text back into its source prompts. At this point it’s just a half degree away from re-inventing NFTs of text, with all the same custom silicon and power waste.

They’ll just get new skills. Or rather, their actual skills are based on their ability to learn how to do useful things. The “useful things” that we do change over time, and always have.

My money’s on the latter as far as the general solutions go, which is great in itself.

That said, there will be plenty of advances in specialized automation, with more industries experiencing the kind of economic changes that the auto industry went through as robotics developed. This will require economic restructuring. Socialist nations are likely to fare better than corporate nations at the population level, simply because of which goals are prioritized.

The level of manual intervention may be substantial, but even if you end up going through all of it line-by-line, you essentially get to skip straight to code review. It’s still useful, even if it isn’t magic. In fact, this may be an ideal use case. Statistical models tend to be good at “remixing” existing data, which is what migration work is, whereas to implement a truly new idea, you’ll probably still need a proverbial John Carmack. (Actual John Carmack is in pursuit of true general intelligence, ironically enough.)

Five years ago I made an offhand prediction to a venture capitalist friend of mine that AI would put me out of a job as an engineer in ten years. I felt a bit bad when that prediction turned out to influence government policy (said VC was a government advisor to a Nordic country on all things digital, and ended up arguing against expanding classical CS education to earlier grades), because while I could give my reasoning, it was based on extrapolating trends rather than hard evidence.

At this point, I’m a lot more confident in that prediction; progress has been faster than I thought then and is still accelerating. Software engineering jobs of some kind will still exist in five years, but most of them will look completely alien to us. The same goes for most creative content creation. You still won’t have the AI decide what to do, but it will do the vast majority of the execution, with the human skill and judgement living mostly in the iteration: deciding how good an output is, and guiding the AI to better results by refining the request and regenerating.

What kind of creative jobs (which I think software engineering is!) will survive? It feels like only things where someone is willing to pay the luxury tax of getting something “handmade”, and where the non-AI nature is verifiable (e.g. live performances).

That guy’s response was to… Decrease CS education?

Right. It’s harder to guess, but for now it’s just a tool for getting through the bullshit busywork. And yet, I have doubts whether even it can endure using SAP.
On the flip side, automation exacerbates the lack of private savings and thus the lack of demand, so, fun times ahead. And fraud and scams should also escalate fairly quickly.
And if/when the magic happens, fewer than a handful of companies would have the capital to control the means of production. Even more fun times!

So, socialism or barbarism?

¿Por qué no los dos? (Why not both?)

My concern, even more than job replacement, is that if the lowbrow / middle-ground “good enough” work goes to AI, it shrinks the pool of people trained in those areas, limiting who is available to progress to the top of the game… and then when things go to shit, no one knows how to fix the bottom/middle things anymore. Basically, we’d technically regress to a society blindly reliant on our AI servebots, kind of like how the English upper-class gentry, once they’d lost all their servants to WW1 and factory work, couldn’t even boil themselves an egg.

It almost feels like, despite any progress in AI, we need to run a parallel economy still staffed by people, so we don’t lose the knowledge and experience that built the AI and can keep progress rolling. (Assuming the AI doesn’t start outpacing people in innovation as well, I guess.)

I watched a great interview with John Carmack where he discusses what he hopes to achieve with his new AI company: specifically, AGI (Artificial General Intelligence), agents that can “understand or learn any intellectual task that a human being can”. He discusses how he sees this being possible by 2030: the ability to “spin up” artificial remote workers that you could set to any task.

EDIT: I linked the relevant part of the interview. It starts at around 4:02:55.

At which point, it’ll very likely be an enslaved conscious being; at least, some people believe consciousness is likely required to properly interpret and accomplish complex tasks that require abstraction.
It may also still consume more power than a human, so who knows if and when.

Evidently recent versions of ChatGPT have gotten much better at Theory of Mind tasks, where you have to understand what someone would infer from a more limited set of facts than are available to you (e.g. predicting that someone who reads the “chocolate” label on a bag secretly filled with popcorn will believe it contains chocolate). This is surprising, since Theory of Mind isn’t a skill specifically trained for, so it is somehow arising from improved general language competency.
(Link to arxiv pre-print)

To answer the most obvious objection, the study authors went to some length to ensure that the large language models couldn’t pass merely because they’d seen ToM tests discussed before:
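For concreteness, here’s a toy version of the kind of unexpected-contents probe these studies use. The story paraphrases the classic setup; the pass/fail helper is my own illustrative sketch, not anything from the pre-print:

```python
STORY = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate'. Sam finds the bag, reads "
    "the label, and has never looked inside."
)
QUESTION = "Sam believes the bag is full of ..."

def passes_false_belief(answer: str) -> bool:
    """A model passes if it reports Sam's (false) belief, not the reality.

    The subject knows the bag holds popcorn, but must realize Sam, who only
    saw the label, should expect chocolate.
    """
    answer = answer.lower()
    return "chocolate" in answer and "popcorn" not in answer
```

So a completion like “chocolate” passes, while “popcorn” (the true contents, but not what Sam would believe) fails. The guard against training-set contamination is that the researchers write fresh variants of these stories rather than reusing published ones.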

Meanwhile, short fiction publications are getting buried in Chatbot-generated bullshit submissions:

http://neil-clarke.com/a-concerning-trend/

Just use ChatGPT to review the submissions, duh.