Development, security, and operations teams are increasingly turning to artificial intelligence to boost productivity and slice through grunt work. But experts warn that AI isn’t a jack of all trades yet, and it shouldn’t be treated as such.
Nearly 70% of surveyed DevSecOps teams plan to use AI in software development, and 23% reported that they already do, a September GitLab study found. Among the 1,000 respondents in the DevSecOps sector, 83% said they believe AI implementation is “essential” to keep from falling behind.
The GitLab study hinted that AI can be applied more broadly to DevSecOps work than it is currently. According to the survey, developers spend 75% of their time “on tasks other than code generation—suggesting that code generation is only one area where AI can add value.”
David Santos, GitLab’s chief product officer, told IT Brew that testing, code review, and vulnerability patching could all be candidates for AI usage outside of code generation.
“Even if you just focus on making your developers more efficient, something around them will break. And to really apply AI successfully into software development, you have to apply it across the entire life-cycle—so that other 75%,” Santos said.
However, GitLab warned that security, privacy, and the use of intellectual property are common “stumbling blocks” for developers using AI. Nearly one-half of the survey respondents cited copyright protection ambiguities as a main concern, and almost 40% said they’re worried that AI will introduce security vulnerabilities.
Santos pointed out that there are well-documented concerns over whether products aided by AI will inadvertently include—and expose—protected information. As Gizmodo reported, sensitive information fed into AI models like ChatGPT can have a second life-cycle as the prompts may be used to further train the bot. For example, Samsung employees came under fire this spring for copying confidential code and meeting notes into ChatGPT.
One DevOps engineer told GitLab that the best place for AI is when it’s used to automate “simple, repeatable tasks.” While AI can streamline some routine responsibilities, “the humans involved have to be aware and responsible for what the AI is generating,” the engineer noted.
Human responsibility for automated outputs is a common thread among AI pessimists and optimists alike. Charity Majors, CTO at Honeycomb, who is self-admittedly skeptical about many AI uses, told IT Brew that it’s important to keep humans in the driver’s seat of any project or business area that relies on AI.
“We really need to look for ways to let machines do what machines do well—just crunch lots of numbers—and for people to do what people do well, which is [to] interpret things and attach meaning,” she said.
Majors is particularly optimistic about the use of generative AI, which can help developers sort through large troves of data, discern patterns, and spin up preliminary code. She likened its use in DevSecOps to Google’s autofill feature, which can quickly flesh out queries as a starting point. However, humans must still format, integrate, and test the code before putting it into production.
For Santos, additional training around how to use AI effectively can help developers understand that they won’t become obsolete anytime soon.
That’s an undertaking that often starts at a high level: More than one-half of the survey participants said their organization “has hired or will hire new talent to manage the implementation of AI in the software development life-cycle.” Respondents also cited educational courses, practicing with open-source projects, and learning from peers and mentors as some of the top resources they reach for to build AI skill sets.
“More than half the respondents shared a concern that their job could be replaced by AI in the next five years,” Santos told us. “We need to reemphasize the importance of the human component to how we adopt AI as both someone using AI to build their software, but also AI as part of delivering their software.”