So, your employee made an app: A vibe-coding checklist

How today’s software pros treat code outputs from mainstream LLMs.


It’s happened to Dale Wesdorp, chief revenue officer at digital product builder Miyagami, and likely plenty of IT professionals these days: an employee arrives with an app built by a large language model (LLM), and they want to use it in their current workflow.

Wesdorp’s clients have vibe-coded mobile apps, websites, games, and even niche customer relationship management (CRM) systems.

The app may work, but that doesn’t mean IT pros should throw it into production—there may be security concerns, or the code might not play well with the existing tech stack. 

We spoke with software developers about what they do with employees’ LLM output to make sure everyone stays safe and productive.

Everybody’s doing it. Wesdorp has seen both clients and colleagues turning to LLMs “left and right” for early creative ideas and proofs of concept.

A new poll from Gallup found that half of employed US adults are using AI on the job:

  • Just over four in 10 respondents (41%) said they use the tech “to generate ideas.”
  • Virtual assistants and AI writing/editing tools led the field of commonly used tools, followed by AI coding assistants (used by 14% of respondents).

Here’s what’s on Wesdorp’s checklist:

  • Is it maintainable? Wesdorp looks for vibe-coded products with reusable, simple, modular components. You don’t want 40 buttons hardcoded in 40 different places.
  • Is it secure? Hardcoded API keys, for example, could lead to access compromise if the codebase ends up in a public repo.
  • Does it touch user data? “If it does, then you should consider building it into something that’s more compliant and more secure, especially if you want to integrate it in the codebase,” Wesdorp said.
  • Is the code sloppy? Ideally, the code has repeatable code patterns and folder structure that adheres to standard best practices.
  • Does the code have automated tests? Tests built alongside the code can validate expected outputs—if a change breaks a feature, a test can catch the error automatically.
  • How critical is it? An internal app that doesn’t share user data or API credentials, for example, might not require rigorous checks.

Top insights for IT pros
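Two of the checks above are easy to illustrate in code: hardcoded secrets and automated tests. Here's a minimal Python sketch — the variable name `EXAMPLE_API_KEY` and the `apply_discount` function are hypothetical stand-ins, not examples from Wesdorp's checklist:

```python
import os


def get_api_key() -> str:
    """Read the API key from the environment instead of hardcoding it.

    A key embedded as a string literal ships with the source and is
    exposed the moment the codebase lands in a public repo; a value
    pulled from the environment at runtime is not.
    """
    key = os.environ.get("EXAMPLE_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("EXAMPLE_API_KEY is not set")
    return key


def apply_discount(price: float, percent: float) -> float:
    """Example business logic worth guarding with an automated test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount() -> None:
    """A test built alongside the code: if a later change breaks the
    discount math, this assertion fails automatically."""
    assert apply_discount(100.0, 10) == 90.0
```

A test runner such as pytest would pick up `test_apply_discount` automatically, which is the "tests built alongside the code" idea from the checklist.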

Security checklist, continued. Many LLMs provide users with the actual code output for their app or game. Shubham Patil, senior software developer at truck manufacturing company Paccar, said he would want to review packages and libraries chosen by the LLM—and update vulnerable, deprecated, or outdated ones.

Companies also likely have their own public cloud environments and design libraries, often featuring company-specific brand guidelines and typography—factors that an LLM coder wouldn’t know about or automatically incorporate into results.

“LLMs can create software, but they don’t necessarily maintain it,” Patil said.

What do you do with it? Wesdorp will treat the output like a proof of concept to explore. He’ll click through flows to understand functions and desired behavior. With the help of agents and AI-assisted documentation, he has also experimented with reverse-engineering the vibe-coded output into standardized spec documents, product requirements, and roadmaps. These documents apply engineering standards and security guidelines that set the idea up for a production-ready rebuild, according to Wesdorp.

Richard Demeny, founder and CEO of education tech startup Canary Wharfian, will largely ignore an LLM’s output. He focuses on the app’s features, not how it’s coded.

“There will certainly be a point where you need to bring in somebody who knows what they are doing. And trust me, those developers are not going to be willing to work with an existing code base. They are going to say, ‘Okay, does this idea have merit? If it does, let’s do it the right way,’” Demeny said.

About the author

Billy Hurley

Billy Hurley has been a reporter with IT Brew since 2022. He writes stories about cybersecurity threats, AI developments, and IT strategies.