TL;DR: Databricks Data + AI World Tour Sydney 2025 featured thought-provoking speakers and sessions on advancing your data and AI strategy. This blog highlights how leading organisations are driving transformation and the data ecosystem needed to accelerate that change.
Databricks recently brought together thousands of innovators, business leaders, and developers for its annual world tour in Sydney. The conversations and demos made it clear: AI’s experimental phase is ending. The novelty of AI has given way to the serious business of execution.
V2 AI, as a Databricks partner, was a proud sponsor, with our booth featuring real demos of agentic AI in action, both as an advanced analytics tool and as a migration partner. Both demos featured multiple AI agents working in the background, collaborating and handing off tasks to achieve user goals.
This year’s event highlighted how Australian enterprises are focusing on governing, scaling, and embedding AI into core business operations.
As organisations move from hype to hard results, the question is no longer whether to adopt AI, but how to do it sustainably and effectively. Here are some key lessons from the event.

Beyond Data, Enterprise AI Projects Require an AI-Aware Data Platform
One of the strongest themes from the Databricks World Tour was that AI success isn’t determined by the size of your model but by the readiness of your data platform. Traditional data platforms were built for reporting, dashboards, and periodic batch processing. They were not designed for a world where AI agents reason over data in real time, spin up short-lived databases to complete tasks, or participate directly in business workflows.
An AI-aware platform changes that by making intelligence a native capability rather than an add-on.
The Databricks demos, for example, showed how the platform's capabilities converge to create this new foundation:
Lakeflow introduces declarative, AI-supported ingestion and transformation, reducing engineering overhead while improving reliability (a minimal pipeline sketch follows at the end of this section).
Lakehouse brings structured and unstructured data into a unified store for integrated analysis, removing silos.
Lakebase layers on a Postgres-compatible transactional engine that spins up in seconds.
This means AI agents can create, manage, and retire datastores autonomously as part of their workflows. The value is threefold:
Faster time-to-value, as teams move from manual integration work to high-leverage problem solving.
Lower operational risk, as governance, lineage, security, and compliance controls are built into the platform.
Scalable AI adoption, as organisations can run analytics, BI, and agentic workloads on one governed fabric, instead of stitching together disconnected tools.
Most of the new databases being created on platforms like Neon (recently acquired by Databricks) are already spun up by AI agents. This is the new baseline for enterprise AI.
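To ground the Lakeflow point, here is a minimal sketch of a declarative ingestion pipeline. It uses the Delta Live Tables-style Python syntax that Lakeflow Declarative Pipelines builds on; the landing path, table names, and expectation rule are illustrative assumptions, not a production design.

```python
# Minimal declarative ingestion sketch using the Delta Live Tables-style
# Python syntax that Lakeflow Declarative Pipelines builds on.
# The landing path, table names, and expectation rule are illustrative.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw lead events ingested incrementally via Auto Loader")
def leads_raw():
    return (
        spark.readStream.format("cloudFiles")         # `spark` is provided by the pipeline runtime
        .option("cloudFiles.format", "json")
        .load("/Volumes/marketing/landing/leads/")     # hypothetical landing location
    )


@dlt.table(comment="Cleaned leads with a basic data-quality expectation")
@dlt.expect_or_drop("has_email", "email IS NOT NULL")  # drop rows that fail the rule
def leads_clean():
    return dlt.read_stream("leads_raw").withColumn("ingested_at", F.current_timestamp())
```

The point of the declarative style is that the engineer states what each table should contain and what quality rules apply, and the platform handles orchestration, incremental processing, and lineage.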
Conversational Analytics Is Becoming the New BI Entry Point
Conversational analytics, or GenBI, lets users ask questions of their data in natural language and get answers back. While some organisations may choose to build this layer over their existing disparate data sources, Databricks includes it as built-in functionality.
With Genie, business teams can ask questions in natural language and receive dashboards and reports, along with explanations. For example, during the event demo, a marketing lead persona asked Genie which tactics drove the most qualified leads last quarter, and received an instant narrative summary with supporting charts and tables. The user could also ask follow-up questions and give feedback to refine the results.
For larger decisions, AI research agents can generate data-backed recommendations tied directly to source tables. This is the kind of context necessary for executive decision-making.
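Under the hood, the conversational pattern is straightforward to sketch. The example below is a simplified illustration of a GenBI-style flow, not Genie's implementation: it assumes a hypothetical marketing.leads table, the openai and databricks-sql-connector packages, a placeholder model name, and credentials supplied via environment variables.

```python
# Illustrative GenBI-style flow, not Genie's implementation: an LLM translates a
# natural-language question into SQL, which then runs against a governed table.
# The marketing.leads schema, model name, and environment variables are assumptions.
import os

from openai import OpenAI
from databricks import sql  # databricks-sql-connector

SCHEMA_HINT = (
    "Table marketing.leads("
    "lead_id STRING, channel STRING, qualified BOOLEAN, created_at DATE)"
)


def question_to_sql(question: str) -> str:
    """Ask the LLM to produce a single SQL statement for the question."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        f"You write ANSI SQL for Databricks. Schema: {SCHEMA_HINT}\n"
        f"Question: {question}\nReturn only the SQL statement."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # A production version would validate the generated SQL before running it.
    return response.choices[0].message.content.strip()


def run_query(statement: str):
    """Execute the generated SQL on a Databricks SQL warehouse."""
    with sql.connect(
        server_hostname=os.environ["DATABRICKS_SERVER_HOSTNAME"],
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as connection, connection.cursor() as cursor:
        cursor.execute(statement)
        return cursor.fetchall()


print(run_query(question_to_sql("Which channels drove the most qualified leads last quarter?")))
```

Tools like Genie add the governance, semantic context, and feedback loops around this core loop, which is what makes the answers trustworthy enough for business users.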
For enterprises, conversational BI unlocks three major shifts:
Analytics becomes accessible to everyone, and the data team stops being a bottleneck.
Faster, deeper decision cycles, as follow-up questions are answered instantly instead of joining a week-long request queue.
Central governance with self-service autonomy encourages data use across the organisation without running into compliance trouble.
At V2 AI, we built our GenBI accelerator almost a year ago and have been helping several clients implement GenBI both within Databricks (connecting to Genie) and outside of it.

Tried-and-Tested Governance and Technology Principles Still Apply
A recurring message from the AI demos on show was that even as AI capabilities accelerate, the foundations of successful enterprise delivery haven’t changed. In fact, they matter more than ever. Simplicity, governance, and architectural discipline are what turn AI from a promising experiment into a dependable part of core operations.
For example, the National Australia Bank (NAB) Ada platform demo presented a practical blueprint for how large, regulated organisations can modernise responsibly. Their approach was built on clarity and discipline:
A streaming-first pipeline (sketched at the end of this section)
A deliberately simple tech stack (Fivetran + Databricks + Power BI)
A strong commitment to cloud-native patterns over bespoke infrastructure.
With clean, governed, and consistently flowing data, NAB is powering use cases like knowledge management, financial crime detection, and personalised customer journeys.
This “make the data layer boring” philosophy is exactly what AI needs to perform reliably. The lesson from financial services is simple: innovation accelerates when guardrails of governance, transparency, and controllability are treated as foundational.
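As an illustration of that “boring”, streaming-first data layer, the snippet below shows a minimal Auto Loader stream landing files into a governed Delta table on Databricks. The paths, schema and checkpoint locations, and target table name are assumptions for the sketch, not NAB's actual configuration.

```python
# A minimal sketch of a "boring", streaming-first ingestion layer using Auto Loader
# on Databricks (illustrative only, not NAB's actual configuration).
# Paths, schema/checkpoint locations, and the target table are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

(
    spark.readStream.format("cloudFiles")                              # Auto Loader: incremental file discovery
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/ops/meta/txn_schema")
    .load("/Volumes/ops/landing/transactions/")
    .writeStream
    .option("checkpointLocation", "/Volumes/ops/meta/txn_checkpoint")  # exactly-once progress tracking
    .trigger(availableNow=True)                                        # process available files, then stop
    .toTable("ops.bronze.transactions")                                # governed Delta table in Unity Catalog
)
```

Nothing in the snippet is exotic; that is the point. A plain, governed, consistently flowing data layer is what lets the AI workloads built on top of it behave predictably.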
Conclusion — Accelerating Real-World AI Deployment
OpenAI’s Head of GTM in ANZ, Satya Tammareddy, highlighted in her opening keynote that real acceleration in AI comes from bringing frontier models directly to where enterprise data already lives. Australia and New Zealand are already proving to be AI-forward markets, with strong momentum across enterprises, developers, and everyday usage.
With the technical foundations now in place, the differentiator is the business outcomes you choose to deliver with AI. The fastest path forward is simple: start with the use cases that matter, use the data you already have, and iterate until value becomes repeatable.




