Keeping this post alive.
I wanted to create a new slider with a clear idea in mind, and I built most of it until I decided to give control to AI. I got lazy, and boy, am I sorry now. The code is unreadable at this point: too many flags and functions all over the place. I was forced to give up on it about a week ago, about $50 later…
What I've noticed is that this is definitely not true AI: it has no real sense of context. It fixes one thing and breaks two more. For example, I had a transition with an opacity effect and a separate pixelation effect. The pixelation inherited the opacity transition, but the opacity had no pixelation. I spent an entire day trying to make the pixelation apply only to the pixelation effect while still combining correctly with opacity. And don't tell me I did not prompt it right, because I tried more than one hundred times, gave it all the details of how it works, and when asked "do you understand?" it gave the right answer, only to mess things up with each iteration.
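For reference, the separation I wanted is trivial to keep clean when written by hand: make each effect a pure function of the transition progress that only touches its own style keys, then merge them, so one effect can never inherit another's transition. This is just a hypothetical sketch in TypeScript (the names, and a blur filter standing in for pixelation, are illustrative, not my actual slider code):

```typescript
// Each effect is a pure function of transition progress t in [0, 1]
// and returns ONLY the style keys it owns.
type Style = Record<string, string>;

const opacityEffect = (t: number): Style => ({
  opacity: String(1 - t), // fade out as the transition progresses
});

const pixelateEffect = (t: number): Style => ({
  // Only this effect is allowed to touch `filter`.
  filter: `blur(${(t * 8).toFixed(1)}px)`,
});

// Merge the styles of whichever effects a slide opts into.
// An effect that is not listed cannot leak into the result.
function composeEffects(
  t: number,
  effects: Array<(t: number) => Style>,
): Style {
  return effects.reduce<Style>((acc, fx) => ({ ...acc, ...fx(t) }), {});
}

// One slide fades only, one pixelates only, one does both.
const fadeOnly = composeEffects(0.5, [opacityEffect]);
const pixelOnly = composeEffects(0.5, [pixelateEffect]);
const both = composeEffects(0.5, [opacityEffect, pixelateEffect]);
```

Because each effect declares its own keys, "pixelation without opacity" is just leaving `opacityEffect` out of the list; there are no cross-cutting flags to untangle.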
It would fix one issue and create two more. It doesn't remember anything if what you're building hasn't been done before, which is exactly my case. It just hallucinates back and forth until you lose your mind. And of course, every time I say something is wrong, it replies, "Yes, you are right."
Creating an app involves a huge amount of context and countless fine details that AI simply doesn't understand. For it, everything is just a concatenation of words. The hallucinations get worse the longer you use it and the larger the context becomes. In my case, by that point, I could not even take over as a developer anymore; there are red flags everywhere: functions inside functions, conditions triggering other conditions, and everything tangled into an unsalvageable mess.
So the conclusion is simple: never let it take full control. At that point, it's game over; the code becomes unmodifiable and nearly impossible to fix, even for you as the developer.
I don't see how this could ever be true AGI, since it clearly doesn't understand context. Explaining an app to it is just feeding it concatenated words; it doesn't grasp scope or how to fine-tune the small details that make a project truly good and finished.
But the propaganda works; all companies fall into this mess, thinking that skill is no longer required. If you let an agent loose inside a commercial or already working app or platform, it will destroy it, 100%, and if you call it out on the damage, it will say, "Yes, you are right :)". CEOs don't understand this part, and of course, they want to replace everyone so it's just them and a bunch of agents that can "read their thoughts."
So the conclusion, considering that this doesn't really improve with new model iterations, is worrying. I could have used GPT-4.1 in my project and ended up with the same mess; not much difference. I expect the bubble to burst badly, because you can't just pour billions in forever with no return. And not much of a return it is: if I, with 25 years of experience and solid skills in creative development, failed this badly, then 99% of other devs, if not more, will fail the same or worse.
Yes, it can write an app from a prompt up to, let's say, 99.9%, but what do you do with the remaining 0.1%? Without it, the app will not work, and you will not be able to fix it as a developer because of the mess of code it writes. What is the point then?
You can also see this in all the vibe-coding videos on YouTube: every attempt is unfinished, looks bad, and has all kinds of bugs. If you think you can fix such a mess as a dev, well, you are wrong, my friend.
The only sane usage right now is for small, contained tasks in the code, where you make sure you understand and follow what it does, so you can give strong guidance and not let it drift off to la-la land… and for that, you need solid skills, so it's probably best to just do it yourself.
Is it good? I honestly can't decide anymore. I feel as confused as the mess of words it produces.
It's probably a big fat lie at this point. I don't see this getting better; the model is broken by design!
Another thing: all their benchmarks and new graphs showing improvements mean nothing in the real world.
It's funny that Sam Altman talks about curing cancer. Really? More likely, AI will end up creating a new form of cancer, infectious and transmissible by air; that outcome seems far more probable than a cure. It feels like a disaster, destroying things around it and making the world worse overall, while the promise sounds like a big, fat lie.
I am so f*king angry that I wasted a week on this project, but at least I now understand how this piece of sh*t "AI" works…
As for replacing me as a developer, that will never happen. The more I use it, the more I see what this really is: propaganda designed to attract billions more in funding, because apparently, it's never enough.