Embracing AI: The New Search Engine of the 21st Century

January 20, 2025

I've been using AI large language models (LLMs) like Claude and ChatGPT for over a year now to develop and work on in-class projects that require us to write and ship web apps and software.

However, I think there are downsides alongside the upsides of integrating them as a tool, and I'll discuss both briefly here.

Efficiency and Speed

When properly integrated into a workflow such as software development, LLMs accelerate the timeline of product delivery.

They also produce examples quickly, such as guidance on creating a custom component or inspiration for changing a component's behavior.

I find they do a good job of producing MVPs and proofs of concept, while letting me iterate on those ideas.

Finally, I think they provide information quickly and efficiently, much like how I envision Google, but faster and without advertisements.

Occasional Hallucinations and Inaccuracies

LLMs tend to hallucinate and give inaccurate information about specific documentation and bugs. Although they do offer a series of troubleshooting steps, they often fail to resolve an issue without broader context.

I often had to go to Stack Overflow or debug the issue myself, rather than feeding the error into an LLM over and over and hoping it would resolve it.

Finally, I found reading through the documentation to be helpful at times when it came to bugs and specific errors.

Conclusion

LLMs can be helpful at accelerating workflows, but they are often inaccurate, and their output needs to be double-checked for correctness.

To use them effectively, I think it's important to take the time to learn the material and understand the fundamentals of whatever you're applying the LLM to, then use it in a targeted manner to accomplish what you actually want to do.