Why AI Won't Replace Humans Anytime Soon
AI development has profoundly impacted our lives since COVID. Despite the growing hype, I see it this way: those who leverage AI to enhance their potential will experience exponential growth, while those who merely copy and outsource their thinking to a chat prompt will get replaced.
Since AI's inception, the hype has been clear: you're getting replaced, if not already, then soon.
Recently, the 'hack' of a dating-safety app called 'Tea', or rather the sloppily coded mess behind it, made the rounds on social media.
The critical issue: the app stored sensitive user data, selfies and ID cards, in publicly accessible storage, available for download by anyone with the URL.
Judging by its look and its security flaws, I assume this app is the result of 'vibe coding,' a euphemism for "prompting the AI until you get what you want, without understanding what you're doing."
Such security issues are becoming common: API keys shipped in the frontend, private data left publicly readable, you name it. Do you think an AI capable of replacing entire teams of experienced developers would make such obvious mistakes?
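To make the "API keys in the frontend" mistake concrete, here's a minimal TypeScript sketch; the key value and the /api/chat endpoint are hypothetical. The first line is the anti-pattern vibe-coded apps keep shipping, and the function below it is the unglamorous fix: keep the secret on a server you control.

```typescript
// ANTI-PATTERN: a secret bundled into frontend code ships to every
// visitor's browser, readable in DevTools or the minified bundle.
const API_KEY = "sk-live-abc123"; // hypothetical key, now public knowledge

// Safer pattern: the browser only talks to your own backend, which keeps
// the real key in a server-side environment variable and forwards requests.
async function askModel(prompt: string): Promise<string> {
  const res = await fetch("/api/chat", { // your server, not the vendor's API
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const data = await res.json();
  return data.answer;
}
```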
If AI struggles with cases like this, what is it good for? One key point: the current AI wave consists primarily of one model type, two if you count the generative models behind images and video.
The text-based models, heralded as replacing humans within six months at most (for the past two years now), are Large Language Models (LLMs).
They operate on tokens, but for simplicity, let's stick with text. You input text, the model conditions on that context, and it uses probability to determine what to output next. Due to this architecture, there's a major reason they can't replace humans: they are incapable of reasoning.
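As a toy illustration of "uses probability to determine what to output," here's a sketch in TypeScript. The distribution is invented for the example; a real LLM computes these numbers from billions of parameters, but the selection step really is this mechanical:

```typescript
// Hypothetical distribution over the next token after the context
// "never hardcode your API"; the numbers are invented for illustration.
const nextTokenProbs: Record<string, number> = {
  " keys": 0.55,
  " credentials": 0.25,
  " endpoints": 0.15,
  " pizza": 0.05,
};

// Sample one token proportionally to its probability: this is the whole
// "decision" an LLM makes, repeated once per token of output.
function sampleToken(probs: Record<string, number>): string {
  let r = Math.random();
  for (const [token, p] of Object.entries(probs)) {
    r -= p;
    if (r <= 0) return token;
  }
  return Object.keys(probs)[0]; // guard against floating-point drift
}

console.log(sampleToken(nextTokenProbs)); // most often " keys"
```

Note what's missing: no step in that loop checks whether the continuation is true, secure, or wise. Likelihood is the only criterion.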
This means LLMs hit their limits once complexity outgrows the patterns they were trained on. They excel at tasks with repeating patterns and abundant training material, ideal for areas like website design.
So why do they expose secrets in projects? Does that mean everyone does it, and the LLM just repeats the pattern? Yes and no. Models trained on vast datasets don't distinguish between good and bad examples; as mentioned, they can't reason. And since those datasets inevitably include bad examples, you risk getting bad results.
There you have it: text-based probability, incapable of reasoning. It outputs text based on your prompt. How can it master the daunting complexity of the real world, where humans already struggle?
From software projects to writing books: if you have the skills to tackle complex, context-based tasks that go beyond copying text or repetitive actions, you're safe for now. Better yet, leverage AI to elevate your potential and skill set, resist the mainstream tendency to outsource thinking, and avoid chasing instant gratification.
Learning new things has never been easier, and it has never paid such dividends.