Over the last few years, I’ve been running experiments in how I incorporate LLMs into my workflow.
Generally, it’s been a value add: getting past the cold start problem, prototyping something real, feeling faster, and learning more than I would have just puttering around to a first MVP. It’s also helped ground cloudy ideas in something concrete.
But I find it’s easy, like with any new technology, to get carried away with its application.
I think of this as taking the car to the store a few blocks away, only to spend more time looking for parking than it would have taken to walk.
I recently found myself in exactly this situation when revisiting a long-running project: Logolens.
Some context: I started working on Logolens after a discussion with a friend, and it felt like a compelling opportunity to explore my backend development skills. So every few months, when AI feels like it’s gotten meaningfully better, I tend to add something new to the project.
Originally, Logolens was to be a tool that acted as a database of logos: as a designer, you’d upload the logo you were making and cross-reference it with other logos in the database. What overlapping similarities are being used? Has someone else already made the logo you thought of? Are you the first one to make it? The idea was a central repository for exploring logo inspiration and verifying that you’re not redoing work that’s already been done.
At first, this was simple: having LLMs help me write Docker containers for image processing, learning about the various methods used for image feature matching, and figuring out which one was best suited for my project.
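To give a flavor of what “image feature matching” can mean at its simplest, here’s a toy sketch of perceptual average hashing in pure Python. This is an illustration of the general technique, not the method Logolens actually uses; the function names and the tiny 4×4 “logos” are mine, and a real pipeline would use a library like OpenCV or imagehash on full-size images.

```python
# Toy average-hash (aHash) sketch: reduce an image to a small grayscale
# grid, threshold each pixel against the mean, and compare the resulting
# bit strings by Hamming distance. Similar logos -> small distance.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two toy 4x4 "logos": identical except one slightly darkened corner.
logo_a = [[200, 200, 10, 10],
          [200, 200, 10, 10],
          [10, 10, 200, 200],
          [10, 10, 200, 200]]
logo_b = [row[:] for row in logo_a]
logo_b[0][0] = 15  # small edit to one pixel

print(hamming(average_hash(logo_a), average_hash(logo_b)))  # prints 1
```

The appeal of methods like this is that they’re cheap and deterministic, which is exactly the kind of thing you don’t need an LLM for.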
But then I got more comfortable with LLMs and thought—hmm, can I just pass the image directly to the LLM and have it do all the classification, feature matching, metadata extraction, and color extraction?
That was clearly too much. Data was getting sloppy. The image sets were no longer normalized. What had been a focused tool was becoming a poorly functioning product.
So I scoped back. What was the LLM actually good at? Teaching me and writing code. What if I limited it to just that: helping me think through architecture and reason about cost structures, while I kept evaluating the results? Keeping a human, me, in the loop.
I had a good base dataset, so I started there. I gave Claude access to the full project directory, created agents for each service, and had them all reference the architecture doc so they shared context for what was being built and what the scope of their role was.
Now the LLM handles what it’s good at: teaching, scaffolding, and writing code. The system does the pixel-perfect computation needed for things like identifying patterns in logos or normalizing datasets.
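As a concrete (and deliberately simplified) example of what that deterministic normalization side can look like: forcing every image to a fixed grid and rescaling intensities into a common range. This is a hypothetical sketch in pure Python, not Logolens’s actual code, and the function names are my own.

```python
# Toy normalization pass: force every grayscale image (2D list, 0-255)
# to the same fixed size via nearest-neighbor sampling, then scale
# intensities into [0, 1] so downstream comparisons are apples-to-apples.

def resize_nearest(pixels, out_h, out_w):
    """Nearest-neighbor resize of a 2D grayscale grid."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [[pixels[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]

def normalize(pixels, size=8):
    """Resize to size x size and rescale 0-255 values to 0.0-1.0."""
    small = resize_nearest(pixels, size, size)
    return [[p / 255.0 for p in row] for row in small]

img = [[0, 255], [255, 0]]     # 2x2 checkerboard stand-in for a logo
norm = normalize(img, size=4)  # every image ends up 4x4, values in [0, 1]
print(len(norm), len(norm[0]))  # prints: 4 4
```

None of this needs an LLM, and none of it should drift run to run, which is the point of keeping it out of the model’s hands.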
This outcome has made me more optimistic: LLMs are moving quickly, and much of the complexity they can’t handle yet, they eventually will. But there is still plenty of space where polishing a product requires a human in the loop, and I wonder if that will always be the case.