
CTOs reflect on ChatGPT's impact & future challenges

As ChatGPT marks its second anniversary, several Chief Technology Officers (CTOs) have shared insights on the AI tool's impact so far and their expectations for what comes next.

Paul Maker, CTO at Aiimi, a data insights firm, highlighted the transformative potential of using Large Language Models (LLMs) in automating and designing business processes. He said, "We all know that Large Language Models can automate business processes. But models which can design new processes on the fly without the end user having to 'build' them would be truly game changing. We've already made breakthroughs at Aiimi. For example, I have been working on using OpenAI to automatically create bespoke Subject Access Request (SAR) processes through complex information retrieval, to help teams looking to discover and protect data more effectively and efficiently."
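To give a flavour of the kind of information-retrieval step Maker describes, the sketch below shows how a single document might be triaged against a data subject's name using the OpenAI API. It is an illustrative assumption rather than Aiimi's implementation; the model name, prompt wording, and flag_for_sar helper are hypothetical.

```python
# Minimal sketch: triage a document for a Subject Access Request (SAR).
# Not Aiimi's implementation; model name, prompt and helper are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def flag_for_sar(document_text: str, subject_name: str) -> str:
    """Ask the model whether a document contains personal data about the subject."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whichever is available
        messages=[
            {
                "role": "system",
                "content": (
                    "You help triage documents for Subject Access Requests. "
                    "Reply 'RELEVANT' or 'NOT RELEVANT', then give a one-line reason."
                ),
            },
            {
                "role": "user",
                "content": f"Data subject: {subject_name}\n\nDocument:\n{document_text}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(flag_for_sar("Invoice addressed to J. Smith, 12 High St.", "J. Smith"))
```

In practice a team would run a step like this across a whole document store and have a human review anything flagged as relevant, but the single-document call above captures the basic pattern.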

Chris Agmen-Smith, CTO at Patchwork Health, which collaborates with the NHS, noted that ChatGPT has enhanced his team's programming capabilities. He commented, "ChatGPT has helped me gain near-instant fluency in new programming languages. Based on knowledge I already have of equivalent concepts in other languages, ChatGPT (via Copilot) prompts us with blocks of code."

Edd Read, CTO at tiney and former CTO at Graze, described ChatGPT as a remedy for creative block. He said, "For me, ChatGPT is an incredible anti-procrastination tool. When I'm staring at a blank document, fiddling with a sensitive explanation, or dreading writing a tricky email, I tend to stall. ChatGPT is amazing at helping me get over that initial hump. You can chuck in a load of unstructured thoughts, with no fear of judgement, and its responses are usually enough to get me started. I use it for this purpose every day."

Reflecting on challenges, Paul Maker shared a notable failure from attempting to use GPT models to restyle a user interface component. He explained, "I once tried using a GPT model to improve the user interface of a component, in line with the look and feel of the existing one. The problem is that you can't give GPT tools enough 'context' - or code - for this. So the result was a spectacular fail. The model got the primary brand colours right. But applied them to all the wrong places."

Chris Agmen-Smith discussed the issue of hallucinations in AI outputs. He cautioned, "The hallucinations. It's easy to believe that ChatGPT is actually 'thinking' when it replies to you... In any format where ChatGPT is generating pure text for a human to read, it's not so easy to spot the errors. In those cases, identifying hallucinations requires independently verifying every single assertion."

Edd Read discussed the limitations of ChatGPT in technical tasks. He stated, "A trap I find myself falling into sometimes is assuming that ChatGPT can do too much. In particular, when I work with technical coding tasks... I'll spend all this time going round and round working it all out, when if I had just written the code from scratch myself, I'd have completed the task in half the time."

As for the future, Paul Maker emphasised the importance of data readiness for upcoming AI tools like GPT-5. He observed, "Businesses will be hasty to adopt GPT-5... But they'll struggle to use the technology safely and successfully without the right data foundations in place. Data should be accurate, up-to-date, and properly stored... Few companies are in this position right now."

Edd Read anticipated deeper operating system integration and growing wariness of AI-generated content. He predicted, "I believe that AI in general will start becoming incredibly powerful when developers integrate it more with operating systems in smartphones and laptops. I also think we're going to start seeing people getting more wary of AI-generated/edited content in day-to-day life and, as a result, more tools for detecting and flagging it."
