November 04, 2025

The Best Approach for Coding with Gen AI? Be the Architect … not the Laborer.

Tool-assisted coding with generative AI is reshaping how developers build. Here’s how Liquid’s tech team is using it to accelerate development while staying in control of quality.

Josh Kohler

Web Developer

[Illustration: a white robotic hand interacting with a code symbol, against an abstract programming window.]

It’s pretty wild how quickly generative AI has become a staple in the toolbelt of … well, nearly every field at this point. But it makes sense … it’s 2025, after all! With the meteoric rise of generative AI have come many boons, many drawbacks, and perhaps even more controversy than both. For better or worse, though, the writing is now officially on the wall: generative AI is here to stay. But where exactly is its place?

As the technology steps out of its infancy and into the lights and clockwork of the business world, many companies are struggling to leverage generative AI successfully: a recent report out of MIT shows that 95% of AI pilot programs fail to deliver increased revenue. Many critics have opined that it’s all been just a fad … that the cons outweigh the pros, and companies are gradually finding that out for themselves.

And yet … it can be done right.

There are a few tricks to it, and perhaps a few magic words to speak as well! At the end of the day, though, leveraging generative AI is a skill, and like any other skill it can be learned and requires practice to develop. Part of mastering a skill is knowing when to apply it … and when not to. The truth is, generative AI is not the Swiss army knife we’d like it to be. It isn’t artificial general intelligence yet; its applicability is still more situational than universal. Just as a sailing ship needs a knowledgeable captain to reach the right port, whether you find success with generative AI depends on the course you chart … with plenty of storms, icebergs, and other perils to avoid!

Luckily, there are skillful ways to harness the power of generative AI in every field, and one niche where generative AI has especially high potential brings it all the way back to its roots: code! With the advent of ChatGPT, Google Gemini, GitHub Copilot, and other gen AI tools based on large language models (LLMs), using AI to write code has become a lot more accessible, and a lot more powerful. It has opened up a whole discipline of tool-assisted coding that promises to generate revenue by speeding up and improving the development process … when used the right way. But which way is the right way?

What Gen AI Is Good At … and What It Is Not

Here at Liquid, our team has found that there are certain coding tasks and roles for which generative AI is well-suited, and others for which it is poorly suited. It’s not enough to just use generative AI … for it to be a successful endeavor, it needs to be used efficiently. That means not only applying it to tasks where it produces quality results quickly, but also avoiding tasks that are too complex, demand a high degree of expertise, or require heavy revision. Let’s explore some examples:

Boilerplate (good) vs. Integration (not as good)

One use case at which generative AI excels is writing boilerplate code: code that is typically simple and repetitive, such as declarations of classes, methods, and variables, implementations of CRUD (Create, Read, Update, Delete) operations, and many common API calls (such as to Amazon’s AWS S3 API). These units of code form the backbone of virtually every application, yet writing them is monotonous and takes time that would be better spent on the unique business logic that actually solves our problems. Luckily, a short but well-written prompt can rapidly generate most if not all of the boilerplate we need to get started on the good stuff, typically with minimal revisions. Boilerplate is a great fit for generative AI because the task is simple and well understood, with plenty of reference material available for models to train on thoroughly.
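
To make that concrete, here is a minimal sketch of the kind of CRUD boilerplate a one-paragraph prompt can reliably produce. The Customer entity and in-memory repository are hypothetical stand-ins for illustration, not code from any real project:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity, used only for illustration.
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public string Email { get; set; } = string.Empty;
}

// Typical AI-generated CRUD scaffolding: simple, repetitive, and exactly
// the kind of code a short, well-written prompt handles well.
public class InMemoryCustomerRepository
{
    private readonly Dictionary<Guid, Customer> _store = new();

    public Customer Create(Customer customer)
    {
        customer.Id = Guid.NewGuid();
        _store[customer.Id] = customer;
        return customer;
    }

    public Customer? Read(Guid id) =>
        _store.TryGetValue(id, out var customer) ? customer : null;

    public bool Update(Customer customer)
    {
        if (!_store.ContainsKey(customer.Id))
            return false;
        _store[customer.Id] = customer;
        return true;
    }

    public bool Delete(Guid id) => _store.Remove(id);

    public IReadOnlyList<Customer> ReadAll() => _store.Values.ToList();
}
```

None of this is intellectually demanding, which is exactly why it’s a good candidate to delegate.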

However, you can run into trouble when using generative AI to write code that performs complicated tasks, such as integrating multiple large pieces of code or completely distinct services. The AI tends to hallucinate a best guess at what you want: it may interpret technical terms or framework names by their everyday meanings, or assume an API exposes the operations it thinks should exist rather than the ones it actually offers. The more complicated the task, the more massaging the generated code typically needs to be useful … and in the worst case, fixing what the AI spits out can take more work than writing it yourself from scratch in the first place.

Researching common, well-known topics (good) vs. niche/proprietary topics (not as good)

One often overlooked use of generative AI is researching topics you don’t know much about. Every developer occasionally needs to use a technology or framework they have no experience with. In these cases, generative AI can provide a starting point by suggesting web resources such as documentation, relevant blog articles, and general overviews to help you understand what you’re working with. It can even point you to the most active communities dedicated to a topic and help you separate the conceptual wheat from the chaff.

However, generative AI models are only as good as their training data, and they perform poorly when asked about topics for which good information is unlikely to appear in that data. Obscure topics and proprietary systems with private-access documentation may not be part of a model’s training set at all, so the best it can generate is an educated guess at what you’re looking for, when what you really want is an exact and complete answer. That, again, means more time spent massaging code … and sometimes the output is so off the mark that fixing it would take longer than rewriting it from the ground up.

Fortunately, AI is a lot more reliable when it comes to very well-known topics and APIs that have comprehensive public documentation and plentiful reference examples in the training set. Examples include the APIs of large public services (Microsoft Azure, Amazon Web Services, Google, etc.), the syntax of popular programming languages and frameworks, and commonly used technical terminology.
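
For instance, uploading a file to Amazon S3 with the AWS SDK for .NET is documented exhaustively across official docs, blog posts, and Q&A threads, so models tend to get calls like this hypothetical helper right on the first try (the bucket name and object key are placeholders):

```csharp
using System.Threading.Tasks;
using Amazon.S3;                // NuGet package: AWSSDK.S3
using Amazon.S3.Model;

public static class LogUploader
{
    // Uploads a local file to S3. The bucket and key below are
    // hypothetical placeholders, not real resources.
    public static async Task UploadLogAsync(string localFilePath)
    {
        using var client = new AmazonS3Client();

        var request = new PutObjectRequest
        {
            BucketName = "example-log-archive",   // placeholder bucket
            Key = "logs/app-log.zip",             // placeholder object key
            FilePath = localFilePath
        };

        await client.PutObjectAsync(request);
    }
}
```

The flip side, as noted above, is that this reliability evaporates the moment the API in question is obscure or private.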

Junior-level tasks (good) vs. Senior-level tasks (not as good)

A sentiment you’ll often hear is that it’s best to treat generative AI like an intern: direct it to tackle the simple tasks (which it can often finish quickly and efficiently) while you take on the tougher parts that deserve a seasoned developer’s careful attention. Used this way, it feels much more natural, strongly resembling a pair programming exercise. It also helps the AI model gradually absorb the purpose and structure of your application, as the instructions you feed it accumulate and keep it anchored to a consistent theme or design pattern while you code.

On the other hand, using generative AI to fill a gap that only an experienced developer can fill is likely to cause more trouble than it’s worth: an AI can easily get critical details of a complex, application-specific task wrong and mislead less experienced developers who would otherwise be relying on a senior for guidance.

Tips and Tricks

Now that we have a better idea of when to use generative AI, let’s talk more about how to use it well. Speaking from experience, there are a few usage patterns that we have noticed tend to deliver good results pretty reliably. Here are some techniques you can use to take advantage of generative AI while you code:

Directing use of best practices

A surprisingly simple way to improve your AI model’s output quality is to direct it to follow best practices for whatever language or framework it’s working with. For example, as part of your prompt you can tell it to use descriptive variable names, to generate comments explaining what each part of the code does conceptually, to pay special attention to catching, handling, and logging errors and warnings, to use security-first design patterns, and more. By having the model emphasize the applicable best practices, you bias it toward producing high-quality code that is less likely to contain bugs or need tweaking.
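
As an illustration, a prompt clause like “use descriptive names, comment each conceptual step, and catch, handle, and log all errors” tends to steer the model toward output shaped like this hypothetical configuration-loading helper (the names and structure here are illustrative assumptions, not any project’s real code):

```csharp
using System;
using System.IO;

public static class ConfigurationLoader
{
    // Reads a configuration file, following the best practices the prompt
    // asked for: descriptive names, conceptual comments, and explicit
    // error handling with logging.
    public static string? LoadConfigurationText(string configurationFilePath)
    {
        try
        {
            // Validate input before touching the file system.
            if (string.IsNullOrWhiteSpace(configurationFilePath))
                throw new ArgumentException("Configuration file path must not be empty.");

            // Read the entire configuration file into memory.
            return File.ReadAllText(configurationFilePath);
        }
        catch (FileNotFoundException ex)
        {
            // Log the specific, expected failure case...
            Console.Error.WriteLine($"Configuration file not found: {ex.FileName}");
            return null;
        }
        catch (Exception ex)
        {
            // ...and fail gracefully on anything unexpected.
            Console.Error.WriteLine($"Unexpected error loading configuration: {ex.Message}");
            return null;
        }
    }
}
```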

Writing Unit Tests

An often overlooked coding best practice is writing unit tests for every piece of code you ship. In a perfect world every developer would happily spend the extra time to do so, but it’s a frequent reality of business that a customer’s budget simply won’t allow for it. Luckily, unit tests are a topic with plenty of reference material for AI models to train on, and they are often straightforward, if monotonous, to implement, much like boilerplate code. That makes generative AI well-suited to quickly producing reasonable-quality unit tests. In many cases, you can just drop the code you want tested into the prompt and have it spit out unit tests which, while they may not be the most comprehensive, are certainly better than nothing.
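
For a sketch of how that plays out, suppose you paste the small, hypothetical PriceCalculator helper below into a prompt and ask for unit tests; what comes back (xUnit here) typically looks something like this:

```csharp
using System;
using Xunit;

// A small piece of application code pasted into the AI prompt.
public static class PriceCalculator
{
    // Applies a percentage discount to a price; rejects invalid inputs.
    public static decimal ApplyDiscount(decimal price, decimal discountPercent)
    {
        if (discountPercent < 0 || discountPercent > 100)
            throw new ArgumentOutOfRangeException(nameof(discountPercent));
        return price - (price * discountPercent / 100m);
    }
}

// The kind of tests a generative AI model typically produces:
// not exhaustive, but far better than nothing.
public class PriceCalculatorTests
{
    [Fact]
    public void ApplyDiscount_ZeroPercent_ReturnsOriginalPrice()
    {
        Assert.Equal(100m, PriceCalculator.ApplyDiscount(100m, 0m));
    }

    [Fact]
    public void ApplyDiscount_FiftyPercent_HalvesPrice()
    {
        Assert.Equal(50m, PriceCalculator.ApplyDiscount(100m, 50m));
    }

    [Fact]
    public void ApplyDiscount_NegativePercent_Throws()
    {
        Assert.Throws<ArgumentOutOfRangeException>(
            () => PriceCalculator.ApplyDiscount(100m, -5m));
    }
}
```

They won’t catch everything (note there’s no test for the 100% edge case), but as a baseline they cost almost nothing to produce.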

Iterative Programming and Feature Augmentation

Another good use case for generative AI is having it build on previously produced code, adding features iteratively: describe one feature at a time, ask the AI to implement it, then manually test the new feature to make sure it works. Having the AI produce unit tests alongside each new feature helps ensure it hasn’t accidentally broken previously implemented ones. It’s also worth explicitly telling the AI to make changes in only one area of a code file (for example, to modify a single method and touch nothing else), and keeping a diff tool handy to compare the old and new versions of the code and confirm the AI actually followed that instruction.
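
Here is a minimal, hypothetical sketch of a single iteration under this pattern; the scoping instruction is shown as a comment so the before/after diff stays easy to verify:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public class ReportExporter
{
    // Iteration 1 produced a plain-text export. Iteration 2's prompt was:
    // "Add an optional CSV output format to ExportReport. Modify only this
    // method; do not change anything else in the file."
    public void ExportReport(IEnumerable<(string Name, int Count)> rows,
                             string outputPath,
                             bool asCsv = false)            // added in iteration 2
    {
        var lines = asCsv
            ? rows.Select(r => $"{r.Name},{r.Count}")       // added in iteration 2
            : rows.Select(r => $"{r.Name}: {r.Count}");     // original behavior, untouched

        File.WriteAllLines(outputPath, lines);
        Console.WriteLine($"Wrote report to {outputPath}");
    }
}
```

A quick pass with a diff tool against the previous version confirms the model changed only the lines it was told to.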

This development pattern can also be used to proactively add straightforward features to a piece of code that you may not need now but anticipate needing in the future, or that might just be nice to have but are hard to justify spending much time on. If a feature is simple enough that a generative AI model can get it right on the first try, or at least very close, that paves the way to building featureful, easily extensible applications without much extra time overhead.

You can also have your generative AI model refactor older code to make it more modular and reusable. It’s a common experience for a method that started out simple to gradually become complicated and monolithic, growing from tens of lines to hundreds or even thousands as business requirements change. Generative AI can help break those larger methods into smaller, more reusable ones, or even reformat code to be more readable, add helpful comments to otherwise-undocumented sections, and so on.
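
As a sketch (with hypothetical names and a deliberately small “monolith”), the kind of decomposition you can ask for looks like this: validation, calculation, and logging each pulled out of one long ProcessOrder method into single-purpose helpers:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record OrderLine(string Sku, int Quantity, decimal UnitPrice);

public class OrderProcessor
{
    // Before the refactor, all of this lived in one long ProcessOrder method.
    // Prompt: "Break ProcessOrder into smaller, reusable methods without
    // changing its behavior."
    public decimal ProcessOrder(IReadOnlyList<OrderLine> lines)
    {
        ValidateOrder(lines);
        var total = CalculateTotal(lines);
        LogOrder(lines, total);
        return total;
    }

    // Each extracted method now has a single, testable responsibility.
    private static void ValidateOrder(IReadOnlyList<OrderLine> lines)
    {
        if (lines == null || lines.Count == 0)
            throw new ArgumentException("An order must contain at least one line.");
        if (lines.Any(l => l.Quantity <= 0))
            throw new ArgumentException("Order quantities must be positive.");
    }

    private static decimal CalculateTotal(IReadOnlyList<OrderLine> lines) =>
        lines.Sum(l => l.Quantity * l.UnitPrice);

    private static void LogOrder(IReadOnlyList<OrderLine> lines, decimal total) =>
        Console.WriteLine($"Processed {lines.Count} line(s), total {total:C}");
}
```

The smaller methods are also exactly the kind of units that the AI-generated tests from the previous section can cover.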

Debugging and Problem-Solving

One of the most painful parts of writing code is running into those dreaded but inevitable errors, especially the kind that keep your code from compiling or running at all. A seasoned programmer knows the most common error messages, but complex applications sometimes throw a novel one: cryptic, and not actually revealing the underlying problem that caused it.

When one of those difficult error messages rears its ugly head and you can’t figure out what it means or what to do, an effective way to get your bearings is to pop the error message (along with any relevant code) into an AI and ask it to explain the error and suggest steps to resolve it. Perhaps the AI will intuit that you need to tweak an obscure value in the database, change a configuration setting, switch from a deprecated API call to a newer alternative, or adjust a setting in your IDE. The more information you give it about the problem, the more likely you are to get a solution.
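
A hypothetical example of such a prompt, built around one of .NET’s classically cryptic startup errors:

```
I'm getting this error when my ASP.NET application starts:

  System.IO.FileNotFoundException: Could not load file or assembly
  'Newtonsoft.Json, Version=13.0.0.0' or one of its dependencies.
  The system cannot find the file specified.

The package is installed via NuGet and the solution builds cleanly.
What are the most likely causes, and what should I check first?
```

The suggestions that come back (say, a version mismatch between projects, or a missing assembly binding redirect) give you a short, ranked list of things to check instead of a pile of search results.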

Real-World Examples

Okay, this all sounds good on paper … but does it actually work in practice? Speaking from experience: yes, it does! Here are a few examples I’ve encountered along the way where generative AI proved quite helpful.

Writing a Log Cleanup Job

Some of our customers’ applications maintain pretty extensive logs, which is usually a big help when it comes to maintaining the application. But these log files can grow to significant sizes over time, chewing into the application’s available disk space. To combat this, we needed a job that would archive old log files, compressing them to save space and deleting them after a certain period of time. However, since these applications run as Microsoft Azure App Services, we did not have direct access to set up a scheduled task on the operating system.

When this request came to me, I had written similar jobs in the past, but not for the Azure platform specifically. It was a simple, routine request, comparable to writing boilerplate code, but after looking through documentation and examples for a while, it was clear there were many possible approaches and not at all clear which was the simplest and most appropriate.

So I asked ChatGPT what the best method for our specific circumstances would be (giving it a few details), and it recommended creating an Azure WebJob to run a PowerShell script. That sounded straightforward enough … but after looking through the WebJobs documentation, I could not easily find information about the underlying agent’s working directory and file system structure, how to handle some of the exceptions the agent could encounter, or how best to log errors for future reference if the job failed.

With a little more thought, I put together a short paragraph directing ChatGPT to write a PowerShell script to run as an Azure WebJob, along with instructions on how to use the Azure portal’s interface to create the job. As part of the prompt, I directed it to follow best practices for catching errors, failing gracefully, and logging all progress, including logging the name of each log file archived and handling all the possible error results of the file operations involved (reading, zipping, copying, and deleting). I also told it to make sure the script was easily configurable for future maintenance.

Truth be told, I was not expecting ChatGPT to produce a solution that worked perfectly without a single change, but that’s exactly what it delivered! I was able to upload the script to Azure as a WebJob, configure it to run daily (ChatGPT even provided the needed CRON expression), and test it out in our development environment. Since the script was configurable, I could force it to fail in various ways (for example, by pointing it at a non-existent directory) and tweak the number of days to keep log files around before archiving them. That let me put it into production with confidence, knowing that if the job failed for nearly any reason, I would at least be able to easily identify why.
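
For reference, triggered Azure WebJobs take their schedule as a six-field NCRONTAB expression (the extra leading field is seconds), which can be entered in the portal or deployed in a settings.job file alongside the script. A daily run at midnight, for example, looks like this:

```json
{
  "schedule": "0 0 0 * * *"
}
```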

Troubleshooting Failed Hardware

At one point, a solid-state drive suddenly failed on me, and one of my PCs would not boot! When trying to boot from the OS drive, I got nothing but a black screen with a blinking white cursor … not even a POST beep code. While I know my way around most PC hardware well enough, it isn’t an area of expertise for me, and I had trouble understanding where the problem was and how to solve it. As my attempts to fix the issue failed one by one, I could feel myself growing more and more frustrated.

Then I had the idea to use AI as my troubleshooting “rubber ducky” and see what guidance it could provide. After I explained the behavior I was seeing, ChatGPT suggested a few diagnostic steps to pin down the exact nature of the failure. It helped me narrow the problem down to a disk failure of some kind, though the degree of failure remained unclear.

Unfortunately, the deeper investigation required more powerful tooling, and there were many options to choose from … only a few of which I was familiar with. I turned to a popular recovery software compilation I had used successfully in the past: Hiren’s BootCD, a Windows PE (Preinstallation Environment) image that boots from a flash drive and comes preloaded with a wide variety of disk and OS recovery applications.

However, I wasn’t sure which application on Hiren’s BootCD could give me the necessary information about the drive’s state of failure. With a little more prompting, ChatGPT suggested a specific application that used to be part of Hiren’s BootCD but was removed from newer versions for some unexplained reason, and it gave me instructions for manually downloading the application, loading it onto the boot drive, and running the appropriate scan.

Having done that, I discovered that the SSD was full of errors and that a critical sector of the drive had become damaged, which forced the drive to lock itself into read-only mode with no practical way to recover from the failure. Shucks! I would need a new drive after all, so I made a quick trip to Best Buy to pick up a replacement.

Now I had a new problem, though: I needed to copy the data from the old drive to the new one without an intermediate drive large enough to hold it all. Every utility I tried for the job failed due to the damaged condition of the drive. Fortunately, ChatGPT was able to recommend a different tool on Hiren’s BootCD that would power through the damaged sectors and copy the entire drive’s contents “raw.”

Ultimately, the exercise ended in success, and before the day was out I was back up and running on the new drive. In the past, troubleshooting and recovering from serious hardware failures has sometimes cost me days of productivity (not to mention nights spent lying awake, unwillingly mulling the problem over), but thanks to ChatGPT’s ability to help troubleshoot errors and propose next steps, I was back on my feet in just a few hours, without the headache of hunting down relevant blog articles and digging through long-buried forum posts.

Filling in the Gaps

For one request, a customer asked us to use a poorly documented Sitecore feature we had only a little exposure to. During the implementation, we ran into an issue where the feature itself was behaving incorrectly, preventing us from completing the work. After some back-and-forth with Sitecore support, the issue was determined to be a bug, and a fix was promised for the next version of Sitecore … but we had deadlines to meet and customers to please!

So we asked an AI model to provide an example of how to fix or work around the broken feature. We expected the model to stick to Sitecore’s API, since we explicitly asked it to. Instead, the AI determined that, due to the nature of the bug, that approach could not succeed … and it offered an alternative solution! The alternative skipped Sitecore’s API entirely and implemented the broken feature’s behavior directly in native C#/ASP.NET. While this wasn’t the solution we had hoped for, it gave us a viable way to work around the bug and still meet our customer’s needs on a short timeframe.

Conclusions

While generative AI grows more robust by the day, the technology is ultimately still in its formative years and needs proper human direction to be used effectively and efficiently. Jumping head-first into the AI craze and trying to work it into every business process without a clear vision is a recipe for long-term failure … but a measured, steady, and informed approach can absolutely make the gains real.

Be the Architect … not the Laborer

Many of the biggest efficiency-killing pitfalls of using AI boil down to what you use it for and how well you can strategically direct its capabilities. At the end of the day, generative AI is a tool like any other, and every tool needs a skillful handler to reach its full potential. Rather than letting the machine direct you, the best results come from you directing the machine. Using AI is in many respects like driving a car: easy to get lost if you aren’t sure where you’re going, and potentially dangerous if driven recklessly … but driven skillfully, it gets you to your destination much faster than your own two feet ever could!

Do You Need a Partner Who Can Leverage Gen AI Effectively?

When done right, generative AI really can be a meaningful time- and money-saver — a force multiplier for your business operations. However, it takes expertise and experience to take full advantage of what generative AI can do … and to avoid all the traps that can wipe out the promised efficiency boost. If you are looking for a digital partner who can help you navigate the emerging AI landscape while saving you on both costs and headaches, drop us a line! We’d love to make generative AI work for you.