Well, I also use AI a lot, and it helps me get more work done in less time. It is a great tool, but indeed, it cannot handle larger projects: give it a bigger task and it starts producing tons of errors and inconsistencies in the result.
Also, I don’t think the current AI model architecture can evolve the way the hype expects it to. Current models have their limitations, and the idea (coming from OpenAI) of “adding more processing power and hardware” and “building larger AI warehouses” to improve the models will not deliver the expected results; it will yield smaller and smaller gains for more and more money invested in AI infrastructure.
So, in my opinion, unless the model architecture changes or quantum computing becomes a mainstream reality, AI will remain just another tool: very useful, but overhyped in its initial phases.
I want to share a specific case that highlights why it’s critical to verify AI-generated responses.
Yesterday, I asked o1, currently considered the top reasoning AI, how to safely run untrusted JavaScript code submitted by users.
It recommended a library called vm2 and even provided a complete, functional code example. The code was so well-written that it worked flawlessly without any changes.
But when I checked the vm2 GitHub repository, I found it had been abandoned due to security vulnerabilities. Its recommended replacement is isolated-vm.
The AI’s code was 100% executable. If I hadn’t looked it up myself, no amount of unit testing would have revealed that vm2 was an insecure and outdated solution.
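For comparison, here is a minimal sketch of what the isolated-vm path can look like. The helper name, the memoryLimit, and the timeout values are my own illustrative assumptions, not recommendations; check the library’s README for the current API before relying on it.

```js
// Sketch: running untrusted user code in its own V8 isolate with isolated-vm
// instead of the abandoned vm2. Limits below are illustrative assumptions.
const ivm = require('isolated-vm');

async function runUntrusted(code) {
  // Each isolate gets its own heap; memoryLimit is in megabytes.
  const isolate = new ivm.Isolate({ memoryLimit: 32 });
  const context = await isolate.createContext();
  try {
    // timeout (ms) aborts scripts that never terminate.
    return await context.eval(code, { timeout: 1000 });
  } finally {
    // Release the isolate's resources explicitly.
    isolate.dispose();
  }
}

runUntrusted('1 + 1')
  .then((result) => console.log('result:', result)) // -> result: 2
  .catch((err) => console.error('rejected:', err.message));
```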
So, yeah…