AI is no longer optional—and CEOs know it. The question is no longer if AI should be adopted, but how.
As leaders cut through the noise of bold claims and buzzwords, the real questions emerge: What’s working? What’s not? And how are companies turning experimentation into real impact?
We sat down with executives across industries to unpack these very questions. What followed was a candid, behind-the-scenes look at the reality of AI adoption, highlighting both the opportunities and challenges of AI deployment.
If you’re looking to cut through the noise and make smarter AI bets on tools, transformation, or implementation, this conversation is for you.
Hosted by Richard Marr, Head of ANZ at DevRev, and Prithvi Sharma, Head of Product at DevRev, the evening featured special insights from Devesh Maheshwari, CTO of Lendi Group, and Alex Cann, Head of Customer Operations at Immutable.
Q: What was your initial inflection point for AI adoption in your company?
Devesh Maheshwari, CTO, Lendi Group:
“My first hands-on experience with AI was at Datamesh Group, where I created a handover bot that summarized my work for the company. At the Lendi Group, we’ve embraced an AI-first strategy to transform broking activities.”
The bot was designed to answer questions and provide information after Devesh’s departure, leveraging Microsoft Copilot for email summarization and meeting integration. It consolidated all his knowledge and customer conversations from his year and a half at the company.
The result? The team continued using the handover bot for at least 15 days after Devesh left, demonstrating the practical utility of AI in knowledge transfer and continuity.
Alex Cann, Head of Customer Operations, Immutable:
“At Immutable, we accelerated product shifts using AI for fast prototype development, transforming backend engineers into frontend experts quickly using AI tools.”
Alex described a significant product shift at Immutable, a blockchain company with a strong backend engineering focus.
Immutable’s developers requested tools to help attract users, necessitating a move from backend to consumer-facing products. Building a consumer marketplace required different engineering skills, particularly in frontend development.
By leveraging AI tools such as Lovable and Cursor, Immutable was able to:
- Rapidly prototype new products.
- Cross-train blockchain engineers into frontend roles much faster than traditional methods.
Impact: What would have taken years was accelerated significantly, highlighting AI’s role in both speed of delivery and expanding team capabilities.
Q: How do you identify the right use case for AI in your business?
Devesh Maheshwari:
“We start with the problem, not the technology. By examining manual processes, particularly in customer-related operations, we decide if AI, RPA, or simpler technologies fit the need.”
Start by identifying use cases that deliver real ROI, moving beyond anecdotal benefits.
The process involves:
- Centering on customer impact.
- Reviewing current manual processes.
- Evaluating if tasks can be done differently or more efficiently.
- Considering if AI is the right solution, or if alternatives like RPA or simple applications suffice.
Alex Cann:
“It’s about aligning AI use cases with strategic goals, like expediting growth or improving ROI through retraining or efficiency gains.”
For instance, here’s how AI can enhance developer productivity:
Challenge: Engineering teams were too busy delivering products to address technical debt.
Solution: Built a “dev agent” to automate dependency upgrades (e.g., using Dependabot for JavaScript).
Impact: Previously, upgrades required manual checks and QA, taking up to a day per upgrade. With 400 microservices, manual upgrades were highly time-consuming. Automation reduced the cycle time and freed up developer capacity without needing additional prioritization.
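Alex didn't share the dev agent's internals, but the shape of the automation is easy to sketch. Below is a minimal, hypothetical Python version: it shells out to `npm outdated --json` (a standard npm command) to find stale dependencies in one microservice, bumps them one at a time, and gates each bump on the service's own test suite. The `upgrade_service` helper and the per-service directory layout are illustrative assumptions, not Immutable's actual implementation.

```python
import json
import subprocess

def outdated_packages(service_dir: str) -> dict:
    """List stale npm dependencies for one microservice via `npm outdated`."""
    # `npm outdated --json` exits non-zero whenever anything is stale,
    # so read stdout without checking the return code.
    result = subprocess.run(
        ["npm", "outdated", "--json"],
        cwd=service_dir, capture_output=True, text=True,
    )
    return json.loads(result.stdout or "{}")

def upgrade_service(service_dir: str) -> None:
    """Bump each stale dependency, gating every change on the test suite."""
    for name, info in outdated_packages(service_dir).items():
        subprocess.run(["npm", "install", f"{name}@{info['latest']}"],
                       cwd=service_dir, check=True)
        subprocess.run(["npm", "test"], cwd=service_dir, check=True)
```

Run across 400 microservices, even a naive loop like this turns a day of manual checking per upgrade into background work that only surfaces failures.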
AI should be viewed as a tool to accelerate long-term strategies (3-year, 5-year, or 10-year plans). The focus should be on expediting growth adjacencies and maximizing ROI through better tools.
Q: What are the nuances in applying AI across different functions or industries?
Function- and industry-specific AI applications
Alex Cann:
“It’s not a one-size-fits-all. Risk appetites differ; finance is risk-averse compared to customer support. AI tools need to be function specific, like enterprise search tailored to legal or customer service needs.”
As Alex noted, deployment must be tailored to each function, because risk appetites vary widely. Finance and corporate planning are risk-averse, since errors there can be high-impact (think P&L mistakes), while customer support can tolerate more experimentation, as an individual support issue is rarely catastrophic.
Traditionally, finance workflows involve manual invoice processing, multiple approvals, and contract checks. Here's what an AI-enabled workflow could look like:
- Invoices are auto-forwarded from Outlook to the appropriate system.
- An LLM or workflow tool (e.g., Zapier) matches invoices to contracts stored in SharePoint, Confluence, or Dropbox.
- Human-in-the-loop validation is retained, with a chat interface for contract queries.
This reduces manual processing time and leverages existing technology (auto-forwarding, chat interfaces).
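The panel didn't share implementation details, but the matching step at the heart of such a workflow is simple to sketch. In this minimal, hypothetical Python version, an invoice with exactly one contract on file is linked deterministically, and anything ambiguous is queued for human review rather than guessed at. All names (`Invoice`, `Contract`, `process`) are illustrative, not a real system's API.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    amount: float

@dataclass
class Contract:
    vendor: str
    doc_id: str

def match_invoice(invoice: Invoice, contracts: list[Contract]) -> Contract | None:
    """Shortlist contracts by vendor; defer anything ambiguous to a human."""
    candidates = [c for c in contracts if c.vendor == invoice.vendor]
    # Exactly one match is safe to auto-link. Zero or several means a person
    # (or an LLM whose pick a person confirms) resolves it.
    return candidates[0] if len(candidates) == 1 else None

def process(invoice: Invoice, contracts: list[Contract], review_queue: list) -> str:
    match = match_invoice(invoice, contracts)
    if match is None:
        review_queue.append(invoice)  # human-in-the-loop validation is retained
        return f"queued {invoice.vendor} invoice for review"
    return f"linked {invoice.vendor} invoice to contract {match.doc_id}"

# One clean match auto-links; an unknown vendor lands in the review queue.
contracts = [Contract("Acme", "C-101"), Contract("Globex", "C-202")]
queue: list = []
print(process(Invoice("Acme", 1200.0), contracts, queue))
print(process(Invoice("Initech", 80.0), contracts, queue))
```

The deterministic shortlist is the point: the LLM, and ultimately the human reviewer, only sees the genuinely ambiguous cases.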
Changing user patterns and workflow automation
The introduction of chat interfaces (e.g., ChatGPT’s chat box) has fundamentally changed user adoption patterns.
Previously, chat boxes on websites saw little engagement. Now, users expect and interact with chat-based interfaces for information retrieval and support.
This shift has influenced how companies design user experiences and integrate AI into workflows.
The integration of AI and automation is not limited to customer-facing roles but is expanding to back-office and operational functions. The chat interface model is being extended to internal functions, such as finance, to streamline processes and improve efficiency.
Q: With AI in production, which functions are seeing the most impact?
AI has moved beyond prototyping and experimentation to production use in several functions:
- Engineering: Automation of technical tasks and productivity enhancements.
- Support: Significant automation opportunities, especially in handling repetitive queries and processes.
- Finance: Workflow automation for invoice and contract management.
- Customer Experience: Use of tools like DevRev to improve internal handoffs and service delivery.
Q: What are the challenges and considerations in deploying AI effectively?
Deploying AI is not just about getting a prototype to work; it's about making it production-ready, secure, and impactful. The transition from experimentation to full integration brings several challenges:
Integrating AI with backend systems: While front-end AI (like chat interfaces) can tolerate occasional errors, backend integrations demand a much higher level of reliability, security, and precision. A single misstep can have serious consequences, making this transition especially challenging.
Security, legal compliance, and ROI assessment: It’s essential that cross-functional teams—including security and legal—understand the technology to avoid delays or failed implementations due to misalignment or missed expectations.
Adoption and maturity curve: Companies must not only train models and define actions but also continuously consider the ethical implications of AI in their workflows.
For example, one executive shared that when their AI voice feature was first rolled out, only 30% of users engaged. But after several rounds of design improvements and ethical enhancements, adoption climbed to 70%.
The takeaway: Successful AI deployment isn’t a one-time effort—it’s an ongoing commitment to refinement and responsible design.
Q: Have there been any AI experiments that didn’t go as planned? What were the lessons learned?
Alex Cann:
“We’ve seen failures when AI projects lacked clear objectives, leading to wasted resources. A well-defined use case with a clear value proposition is vital.”
While enthusiasm for AI is high, many teams fall into common traps during early experimentation:
Lack of clear objectives: A frequent pitfall is launching chatbots or AI tools without defining a specific value proposition, such as driving revenue or reducing costs.
Unclear use cases: Teams often underestimate the importance of focusing on what the AI is supposed to deliver. Vague or generic applications rarely gain traction or justify investment.
The takeaway: Do not build AI agents for their own sake; ensure they serve a clear, beneficial purpose.
Devesh Maheshwari:
“Adoption needs a clear strategy within the organization. It’s essential not just to integrate AI, but to change processes to fully realize its benefits.”
Turning AI projects into business impact requires more than just good tech:
Efficiency gains must be banked: AI must translate into real-world outcomes—either through cost savings or by redeploying staff into more strategic, revenue-generating roles.
Low organizational adoption: AI tools that aren’t integrated into existing workflows—or lack a clear go-to-market strategy—often end up underused.
Process change is crucial: Successful adoption requires changing how work gets done. AI should free humans from repetitive tasks so they can focus on more creative, high-impact efforts.
The takeaway: Adoption depends as much on operational alignment as it does on the technology itself.
Q: How do you manage hallucinations and data control in AI systems?
Managing hallucinations and controlling data exposure are two of the most critical challenges in deploying enterprise AI effectively.
The leaders discussed a combination of strategies to address these issues, drawing from real-world examples and hard-won lessons:
Limiting hallucinations through system design
- One of the most effective ways to manage hallucinations is to limit the data that goes into the system and control how outputs are generated. This includes designing the system to respond with “I don’t know” rather than guessing when context is lacking.
- At DevRev, parts of the AI pipeline fall back to deterministic solutions rather than relying solely on large language models (LLMs). This hybrid design reduces the likelihood of hallucinations.
- For document retrieval, DevRev uses a knowledge graph combining a vector database and a metadata store, leveraging traditional semantic search technology.
- Token limits and scalability constraints also make it impractical to hand every task to an LLM.
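DevRev's pipeline itself isn't public, but the abstain-instead-of-guess pattern from the first bullet can be sketched in a few lines. In this hypothetical Python sketch, `retrieve` and `llm` are injected stand-ins for whatever search and model endpoints a team uses, and `min_score` is an assumed relevance threshold:

```python
def answer(question: str, retrieve, llm, min_score: float = 0.75) -> str:
    """Answer strictly from retrieved context; abstain instead of guessing."""
    hits = retrieve(question)  # e.g., vector search plus a metadata filter
    context = [doc for doc, score in hits if score >= min_score]
    if not context:
        return "I don't know."  # no confident context, so don't guess
    prompt = (
        "Answer only from the context below. If it is insufficient, "
        "reply exactly: I don't know.\n\n"
        "Context:\n" + "\n".join(context) +
        f"\n\nQuestion: {question}"
    )
    return llm(prompt)
```

Refusing to answer below a relevance threshold trades a little coverage for a large drop in confident nonsense, which is usually the right trade in an enterprise setting.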
Evaluation frameworks and real-world tuning
- AI systems require ongoing evaluation and tuning. Leaders liken this to maintaining a “tuning machine”—an ongoing process rather than a one-time setup.
- Frameworks for evaluating output quality and flagging hallucinations are essential. Guardrails must be built in to ensure safe, predictable behavior.
- A real-world example: an AI system was built to calculate stamp duty using lendi.com.au, but it struggled with outdated UI elements and ended up redirecting the user to a government website with instructions. The interaction showcased both the potential and current limitations of AI systems—especially when interfacing with legacy tools.
Data segregation, access control, and ethical safeguards
- To ensure client privacy, data segregation is a top priority. Each customer’s data is kept in an isolated environment. Natural language interfaces are only allowed to query that specific data set.
- This approach increases operational cost but prevents cross-client data leakage, a critical need in highly regulated industries like finance.
- Queries are also constrained to avoid overreach (e.g., asking about trapped funds across unrelated clients), and backend systems are designed to respect these boundaries.
- One organization applies six-year data retention policies and ensures that systems remain boxed-in—only delivering answers from explicitly authorized datasets.
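Here is a minimal sketch of that boxed-in design, with hypothetical names: each client gets an isolated bucket, and the query path takes a tenant ID as a required argument, so the natural-language layer physically cannot reach another client's data.

```python
class TenantScopedStore:
    """Per-client document store: every query must name a tenant."""

    def __init__(self):
        self._stores: dict[str, list[str]] = {}  # one isolated bucket per client

    def add(self, tenant_id: str, doc: str) -> None:
        self._stores.setdefault(tenant_id, []).append(doc)

    def search(self, tenant_id: str, term: str) -> list[str]:
        # The natural-language layer is handed only this method, so it can
        # never see another client's bucket: leakage is ruled out by design.
        return [d for d in self._stores.get(tenant_id, [])
                if term.lower() in d.lower()]

store = TenantScopedStore()
store.add("client-a", "Client A loan contract, stamp duty clause")
store.add("client-b", "Client B trust deed")
print(store.search("client-a", "stamp duty"))  # only client-a documents
print(store.search("client-a", "trust deed"))  # [] since other tenants are invisible
```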
Ethical oversight and compliance
- One of the executives shared that their company must submit an ethics report to its board every six months, reviewing how AI is used and its implications.
- New ISO standards around AI are emerging in the US, which require companies to think seriously about bias, fairness, and even the ethical sourcing of training data.
- They noted that much of AI development today is shaped by a small, homogenous group. While this may work for some companies, it raises questions about inclusivity and blind spots.
Broader industry concerns
- Copyright issues, data bias, and the poor quality of crowdsourced training data are often underdiscussed risks.
- An executive warned against the “digital stretch” of data—using datasets far beyond their original intent or context. This, combined with poor-quality or ethically questionable data, can lead to flawed outcomes.
- These concerns are amplified in high-stakes industries like financial services, where trillions of dollars are on the line.
Striking the right balance: LLMs vs. deterministic workflows
- For business-critical use cases, LLM reasoning needs to be paired with deterministic workflows that ensure consistency and accuracy.
- Tasks like cash handling or analytics are still best executed through traditional technologies like SQL layers and data warehouses.
- Leaders emphasized that AI systems aren’t plug-and-play. Significant upfront investment is needed to build, tune, and integrate deterministic logic alongside generative AI capabilities.
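As a concrete illustration of the split, here is a hedged sketch: a business-critical figure like a cash total is computed by a plain SQL layer (SQLite here purely for the example; the table schema is assumed), so it is reproducible and auditable in a way no generative answer can be.

```python
import sqlite3

def cash_total(db: sqlite3.Connection, account: str) -> float:
    """Business-critical figure computed in SQL, never by a language model."""
    row = db.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM transactions WHERE account = ?",
        (account,),
    ).fetchone()
    return row[0]

# Toy data: the same query returns the same answer on every run.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE transactions (account TEXT, amount REAL)")
db.executemany("INSERT INTO transactions VALUES (?, ?)",
               [("ops", 1200.0), ("ops", -300.0), ("payroll", 5000.0)])
print(cash_total(db, "ops"))  # 900.0
```

An LLM can still sit on top, translating a user's question into a call to `cash_total`, but the arithmetic never leaves the deterministic layer.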
Q: Should AI replace human decision-making—and is the barrier cultural or technological?
When it comes to high-stakes decisions, such as financial controls or medical diagnoses, businesses face a critical question:
Should AI be trusted to make these calls? Or is human judgment still the gold standard?
In practice, the answer is rarely binary. It’s a nuanced mix of trust, risk appetite, and readiness—both cultural and technological.
Financial controls: The case for AI consistency
In financial services, thresholds for transaction reviews (e.g., $10,000 for AUSTRAC compliance, $1 million for executive sign-off) are meant to enforce control. But in reality:
- Human reviewers often make inconsistent decisions, especially when operating under time pressure.
- These processes are costly, and manual decision-making is prone to oversight and delay.
- AI could offer more consistent, explainable, and data-driven decision-making, reducing both cost and risk.
However, deploying AI in this space isn’t simple. Routing decisions, risk assessments, and reviewer selection all involve judgment calls. This is where a hybrid approach emerges:
- For deterministic tasks—like flagging transactions over a set threshold—rules engines are preferred over LLMs due to their predictability.
- For probabilistic tasks, such as identifying the best human reviewer or analyzing anomalies, machine learning models may provide an edge.
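Put together, the hybrid might look something like the sketch below. The thresholds are the ones mentioned above; the reviewer-scoring dictionary is a stand-in for a real trained model, and all names are hypothetical.

```python
from dataclasses import dataclass

AUSTRAC_THRESHOLD = 10_000      # deterministic rule: flag at or above this
EXEC_THRESHOLD = 1_000_000      # deterministic rule: executive sign-off

@dataclass
class Reviewer:
    name: str
    expertise: dict[str, float]  # stand-in for scores learned from past cases

def route(amount: float, category: str, reviewers: list[Reviewer]) -> str:
    # Deterministic branch: threshold checks stay in a rules engine.
    if amount >= EXEC_THRESHOLD:
        return "executive sign-off"
    if amount < AUSTRAC_THRESHOLD:
        return "auto-approve"
    # Probabilistic branch: a model scores who should review borderline cases.
    best = max(reviewers, key=lambda r: r.expertise.get(category, 0.0))
    return f"manual review by {best.name}"

team = [Reviewer("Ana", {"fx": 0.9}), Reviewer("Ben", {"fx": 0.4, "payroll": 0.8})]
print(route(50_000, "fx", team))     # manual review by Ana
print(route(2_000_000, "fx", team))  # executive sign-off
```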
Yet many organizations remain conservative. The risk of an AI making a catastrophic mistake (however rare) limits how much autonomy it’s granted.
Radiology: a case study in augmentation over automation
Radiology offers a compelling test bed for AI: structured image data, a growing backlog, and high demand for precision.
- In Australia, a shortage of radiologists has created a backlog of over 2,000 x-rays—a clear opportunity for AI support.
- A “radiology chatbot” was developed to accept chest x-rays and return preliminary diagnoses, proving useful in internal trials.
But real-world deployment hit roadblocks:
- Regulatory bodies (FDA, TGA, etc.) require extensive validation before AI can be used in clinical settings.
- The tool was misused in testing (e.g., users uploading images of lizards or pets), underscoring the difficulty of controlling open-ended AI interfaces.
- While some radiologists welcome AI as a second check, others remain wary of job displacement or AI error.
Still, the harsh work environment—12-hour shifts in dark rooms—makes AI augmentation an appealing future. As with many industries, the resistance is often cultural, not purely technical.
A cautious but evolving landscape
Across sectors, the move from human-only decision-making to AI-supported systems is gradual. Key considerations include:
- Cost vs. trust: Human-in-the-loop systems are expensive, but still seen as necessary for high-risk contexts.
- Automation placement: Determining where AI fits best—whether in rules-based engines or probabilistic models—is an ongoing challenge.
- Regulation and ethics: Especially in healthcare, compliance concerns often outpace technological capability.
The takeaway: Ultimately, AI is unlikely to replace human judgment entirely—but it can enhance it. As one participant observed, technological resistance is nothing new: every wave of change brings initial pushback, followed by adaptation and eventual acceptance.
Q: How can AI serve as a tool for higher-level strategic thinking, in Peter Drucker’s terms?
Peter Drucker’s classic view that computers should handle routine tasks so managers can focus on thinking has never felt more relevant. In an AI-driven world, that principle is evolving into a practical blueprint for deploying AI not just for efficiency, but for long-term strategy.
From operational aid to strategic enabler
Prithvi Sharma referenced Drucker’s insight that many middle managers struggle when promoted because they lack experience with high-level decision-making. AI can help shift this dynamic—freeing up time, mental bandwidth, and organizational focus for more complex problem-solving.
- AI is most powerful when aligned with strategic objectives, not used reactively.
- By delegating routine and repeatable tasks to AI, leaders can stay focused on areas that require human intuition, judgment, and creativity.
- This mirrors Drucker’s call for using computers to support, not replace, human reasoning.
The limits of quantification
Drawing from Drucker’s 1967 article Beyond the Numbers Barrier, Sharma highlighted a timeless truth: “We cannot put on the computer what we cannot quantify, and we cannot quantify what we cannot define.”
- Many business problems—especially those involving people, ethics, or culture—resist clear definition or measurement. These are precisely the areas where human leadership must remain central.
- Drucker also pointed out that two out of three managers fail when promoted to senior roles, often due to a lack of strategic thinking experience—reinforcing the need to carve out space for this kind of leadership development.
Learning from human delegation
AI deployment isn’t just about automation—it’s about task breakdown and smart delegation, much like working with a human team.
- Sharma noted that expecting LLMs to take on large, undefined tasks—like analyzing multiple CSVs and writing SQL—is often too much at once.
- Instead, success comes from breaking tasks down by complexity, similar to assigning work to an intern versus a senior employee.
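Using the CSV-and-SQL example, here is a hedged sketch of that decomposition: the schema read is deterministic, and each LLM call (an injected `llm` callable here, since no specific model is implied) gets one intern-sized job with a checkable output.

```python
import csv

def read_header(path: str) -> list[str]:
    """Step 1, deterministic: pull a CSV's column names with no model call."""
    with open(path, newline="") as f:
        return next(csv.reader(f))

def run_pipeline(question: str, csv_paths: list[str], llm) -> str:
    """Break one oversized ask into small steps, each checkable on its own."""
    schemas = {path: read_header(path) for path in csv_paths}
    # Step 2, narrow LLM call: write SQL for exactly these schemas.
    sql = llm(f"Given tables with columns {schemas}, "
              f"write one SQL query that answers: {question}")
    # Step 3, narrow LLM call: explain the query, not re-derive the data.
    return llm(f"In two sentences, explain what this query computes: {sql}")
```

Each intermediate artifact (the schemas, then the SQL) can be reviewed before the next step runs, exactly as a manager would check an intern's work.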
Balancing AI’s promise with real-world perspective
The group agreed that Drucker’s vision remains strikingly relevant, with one exception: the dated language of referring to all managers as “he.” As one participant noted, the landscape is thankfully shifting toward greater inclusivity.
- AI should be used to amplify human thinking, not just optimize workflows.
- The goal isn’t just faster execution—it’s creating the space and clarity for leaders to operate at the “10,000-foot view.”
Prithvi concluded that when organizations focus AI investments on freeing up cognitive space, they unlock its full potential—not just to do more, but to think better.