Penguins & AI fatigue: Insights from the 18th Lawtech Summit

Draftable Legal Product Lead, Dr Caspar Roxburgh, and Product Specialist, Yulia Gosper, reflect on their learnings from the 18th Lawtech Summit held on the Gold Coast on 5-6 September 2024, and share key takeaways and practical tips for law firms and legal teams.

This year at the 18th Lawtech Summit, the primary focus was, unsurprisingly, AI and cybersecurity, with some of the brightest minds in technology and law examining critical issues such as the ethical implications of AI, how to build responsible AI, and how to balance human judgment with technology. While these are all highly complex issues, we pulled out some practical and actionable takeaways for legal professionals wanting to embrace technological advancement.

Former Google Chief Innovation Evangelist on penguins and being future-ready

Dr Frederick Pferdt, Google's first (and now former) Chief Innovation Evangelist and author of “What’s Next Is Now: How to Live Future Ready”, kicked off the conference with his keynote on how to cultivate a future-ready mindstate. He highlighted how embracing optimism, curiosity, experimentation, and empathy can empower us to recognise and seize the opportunities that shape our future.

Dr Pferdt also talked about the “Penguin Award”, celebrating those bold enough to leap into ambiguity and pave the way for others. This leadership approach matters in the legal industry, where technology is evolving rapidly, and Dr Pferdt encouraged legal professionals to embrace uncertainty by practising curiosity and experimentation. Being the first to jump doesn't just show courage; it sparks action, inspiring others to follow.

Read more: Why using ChatGPT helps law firms prepare for a generative AI future

Dr Frederick Pferdt speaking at the 18th Lawtech Summit
Elements of a future-ready mindstate, as seen at Dr Frederick Pferdt's keynote at the 18th Lawtech Summit

Moving from AI fatigue to measuring ROI

However, not everyone is eager to leap into the unknown. A session led by Thomson Reuters’ Catherine Roberts (Senior Director, AI & LegalTech, Asia & Emerging Markets) and Ziggy Cheng (Business Development Manager) highlighted the common issue of AI fatigue. Many legal professionals feel overwhelmed by the push to integrate AI into their workflows, stemming from the sheer volume of new tools and the complexity of mastering them. To ease this burden, the Thomson Reuters team's research found that regular training on AI prompting (how to communicate effectively with AI tools) is essential to getting the most out of the technology.

The issue of putting generative AI into practice also raises questions about how we measure the true ROI of AI. While we know generative AI tools can automate significant amounts of work, how do we know how much time we’re actually saving? Joseph Rayment, Managing Director of Automatise, shared how the firm's software, Cicero, tracks and quantifies AI contributions by measuring "AI units," offering an objective way for law firms to demonstrate how AI is reducing billable hours. Clients benefit from reduced costs, while firms reap rewards in the form of enhanced productivity and workflows.
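For teams curious about what such a metric involves, the basic arithmetic can be sketched in a few lines. This is purely illustrative: Cicero's "AI units" methodology is its own, and every name and figure below is a hypothetical stand-in.

```python
# Purely illustrative: a toy model of quantifying time saved by AI
# assistance. Cicero's actual "AI units" are a proprietary methodology;
# every name and figure here is a hypothetical stand-in.

from dataclasses import dataclass

@dataclass
class AssistedTask:
    description: str
    estimated_manual_minutes: float  # typical time without AI
    actual_minutes: float            # time taken with AI assistance

def minutes_saved(tasks: list[AssistedTask]) -> float:
    """Total estimated minutes saved across all AI-assisted tasks."""
    return sum(t.estimated_manual_minutes - t.actual_minutes for t in tasks)

tasks = [
    AssistedTask("First-pass contract review", 90, 25),
    AssistedTask("Summarise discovery bundle", 120, 40),
]
saved = minutes_saved(tasks)
print(f"Estimated time saved: {saved:.0f} minutes ({saved / 60:.1f} hours)")
```

The hard part in practice is the baseline: estimated manual time is itself a judgment call, which is why dedicated tracking tools aim to measure AI contributions more objectively.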

Why law firms need to prioritise cybersecurity now

While there was a lot of excitement about AI, cybersecurity emerged as a growing concern. As geopolitical tensions rise, so does the frequency and sophistication of cyber threats, especially in data-sensitive fields like law. The session on national security and defence by Peter Jennings (Director, Strategic Analysis Australia) and Fortinet's Nicole Quinn (Head of Government Affairs, APAC) highlighted increasing government regulation around "secure-by-design" systems, which ensure security measures are baked into technology from the outset. They gave the example of smart TVs designed so that users must set a strong password for the device before they can use it.

For law firms, cybersecurity is no longer just an IT issue; it's a strategic imperative. With sensitive client data at stake, firms must go beyond compliance, building systems that anticipate future threats. This is where leadership, once again, becomes critical: firms must be proactive, not reactive, in their approach to cybersecurity.

Read more: 10 security questions you should be asking your legal tech vendors

AI innovation vs human judgment

Professor Tania Sourdin, President of the Academic Senate at the University of Newcastle, delivered one of the most thought-provoking sessions, examining the impact of AI on the judicial system. While AI has the potential to streamline court processes, the idea of AI making legal decisions, especially in life-or-death situations, raises profound ethical dilemmas. She referenced the scene from 2001: A Space Odyssey in which the AI computer HAL locks an astronaut out of the spaceship, drawing a parallel to a future where AI might "lock out" human judges from critical decisions in the courtroom.

Professor Sourdin categorised technologies into three branches: supportive tools (e.g. videoconferencing), replacement technologies (e.g. task automation software), and disruptive innovations (e.g. neurotech), each reshaping legal processes in different ways. Despite the rise of virtual platforms, most still see value in in-person hearings, a view reinforced by her work during the pandemic on the ethics of delivering death sentences via Zoom. Her research also touched on the fatigue brought on by virtual hearings, an issue that disproportionately affects women, largely due to “self-focused attention triggered by the self-view in video conferencing”, as reported by a recent Stanford study.

As firms continue to adopt new technology, it will be critical to capture the benefits of these tools without displacing the essential human elements of judgment, ethics, and oversight.

Read more: Balancing AI and human connection: Practical insights from the Australian legal industry

Building AI for a responsible future: from attribution to knowledge retrieval

In a powerful juxtaposition to the concerns around AI, Professor Simon Lucey, Director of the Australian Institute for Machine Learning at the University of Adelaide, delivered a session on AI fundamentals, reminding us that the key to embracing AI is understanding it. Professor Lucey debunked common myths, explaining that AI isn’t just about big data and massive processing power. The real challenge lies in training AI to handle rare, high-impact events. For instance, training self-driving cars for unique, unpredictable situations, like a child chasing a balloon into the street, is incredibly difficult due to the lack of relevant data. Understanding the fundamentals of AI helps you know where to look for errors in these systems.

Professor Lucey emphasised the importance of designing AI systems that can protect themselves from misuse. This means building separate AI systems that search for loopholes in other systems, so that safeguards can be built against them. Continued investment in ethical AI is crucial to ensure these systems remain responsible and secure.

Attribution was another key issue Professor Lucey addressed. In the creative realm, AI systems often rely on vast datasets that include work from uncredited artists and creators. Ensuring that AI can trace its outputs back to the original sources and provide proper attribution is essential in maintaining fairness in the creative ecosystem.

This led to the issue of knowledge retrieval. AI models like ChatGPT are often trained on static data that can become outdated. One approach to solving this is Retrieval-Augmented Generation (RAG), which grounds a model's outputs in documents retrieved at query time, so answers draw on up-to-date information. He recommended exploring tools like Perplexity, which are built on RAG principles and can help ensure the information provided by AI is current.
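For readers curious about the mechanics, here is a minimal sketch of the RAG pattern. It also illustrates the attribution point above: each retrieved passage carries a source label the model is asked to cite. The knowledge base, keyword retriever, and prompt format are illustrative stand-ins, not any particular tool's API; production systems use embedding-based vector search and a live language model.

```python
# A minimal sketch of Retrieval-Augmented Generation (RAG).
# Everything here is illustrative: real systems use embedding-based
# vector search and an actual LLM API rather than these stand-ins.

SOURCES = {
    "Summit program": "The 18th Lawtech Summit was held on the Gold Coast "
                      "on 5-6 September 2024.",
    "Fortinet session": "Secure-by-design means security measures are baked "
                        "into technology from the outset.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by naive keyword overlap with the query.
    A production retriever would use embedding similarity instead."""
    terms = set(query.lower().split())
    scored = sorted(
        ((len(terms & set(text.lower().split())), name, text)
         for name, text in SOURCES.items()),
        reverse=True,
    )
    return [(name, text) for score, name, text in scored[:k] if score > 0]

def build_prompt(query: str) -> str:
    """Augment the question with retrieved, attributed context so the
    model answers from current sources and can cite them."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return (f"Answer using only the context below, citing sources in "
            f"[brackets].\n\n{context}\n\nQuestion: {query}")

print(build_prompt("When was the Lawtech Summit held?"))
```

Because the context is fetched at query time, updating the knowledge base immediately changes what the model can say, which is what keeps RAG-based tools current.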

Ultimately, AI can’t just be about solving today’s problems; it needs to be built responsibly for the future. Just as Dr Pferdt encourages us to develop a future-ready mindstate, so too must our technology be future-ready. We need AI that understands its limitations and helps safeguard us from its own potential risks. In short, we need AI that can help protect against AI.

Professor Simon Lucey speaking at the 18th Lawtech Summit

Top 6 key takeaways

  1. Be the first to jump: Embrace curiosity, optimism, experimentation, and empathy to identify opportunities and lead in an evolving industry. Encouraging innovation can inspire others and provoke real action and change.
  2. Tackle AI fatigue with education: Provide regular training on generative AI tools, focusing on effective usage (e.g., prompting), to reduce overwhelm and get the most out of these tools.
  3. Measure the ROI of AI: If you’re sceptical about the true impact of AI tools, look to quantify the ROI with software like Automatise’s Cicero, which uses "AI units" to measure productivity gains and reduced billable hours.
  4. Invest in understanding AI fundamentals: The key to getting the most out of AI is understanding its capabilities and limitations, so prioritise training on how to find and tackle errors in your AI systems and invest in building ethical AI with appropriate safeguards.
  5. Prioritise cybersecurity now: Elevate cybersecurity to a strategic priority, not just an IT responsibility. Proactively implement "secure-by-design" systems to comply with regulations and safeguard sensitive client data against growing cyber threats. Regularly review and update your firm's security protocols to anticipate and mitigate future risks.
  6. Balance AI with human judgment in decision-making: While integrating AI into workflows, maintain the human element for ethical oversight. Ensure that technology aids decision-making without replacing the critical ethical considerations and judgment of human professionals.