Making AI your own: a story of starting, doubting and following through

Many organizations want to get started with AI but find starting difficult due to questions about data, cost, risk and value. This article shows how to make AI your own: start with the work, create frameworks, build trust with transparency and human review, and take privacy seriously. Small steps make AI practical, reliable and meaningful.


"For business leaders, key users and anyone thinking, 'We do want to get started with AI, but how do you take the first step?'

Sometime on a Monday morning. Project meeting. Someone says, "We need to do something with AI." Someone else asks, "Can we do that with our data?" Yet another: "What will this cost?" And then the final chord: "Can we show results next month?" If this sounds familiar: you're not the only one. Most organizations don't have an adoption problem; they have a starting problem. Not because of unwillingness, but because of questions, perceptions and honest uncertainty. This is a story about making AI your own: how to get started, how to take doubts seriously, and how to persevere without getting bogged down in hype or big change programs.



AI in organizations: between trying and really using it

Technology is moving faster than our structures. AI is now in Office, CRM, help desk software and even tools you didn't know had AI in them. Colleagues are trying out all sorts of things; not because they want to be rebellious, but because they want to make their jobs easier. This sometimes leads to tool chaos and the phenomenon of shadow AI: applications out of IT's sight. Not ideal for privacy and compliance, but a signal that people see opportunities. (See, among others, IBM - What is Shadow AI? and the Cloud Security Alliance.)

At the same time, many organizations are caught in the pilot maze. A demo is easy to build, but scaling up to real use proves harder: integrating with existing systems, getting data quality right, arranging security, making agreements about use and costs. Various studies show that many proofs of concept never reach production (e.g. HBR, IDC via CIO.com, Gartner).

And then there is the trust question. In the Netherlands, we are critical. Rightly so. The Dutch Data Protection Authority calls for vigilance and clear rules of the game (AP - AI Risk Report 2024). TNO has been pointing to digital sovereignty for years: how do we maintain control over data and technology in a world dominated by foreign platforms? (TNO - news, TNO - 2024 report).

In short: there is energy, there are concerns, and there is too little staying power. That's not an excuse to stand still; it's a reason to start deliberately and with people at the center.


Start with the work, not the tool

Making AI your own rarely starts with "which model is best?". It starts with the work. Which moments in a workday are slow, error-prone or just unnecessarily complicated? Where does searching, coordinating or reporting eat up time? Those who can answer these questions will almost automatically find the right role for AI.

An example. Customer service wants faster and more consistent responses. You can try a hundred tools, but the real work is in:

  1. Unlocking knowledge (policies, manuals, contracts),
  2. Settling access rights (who sees what),
  3. Assuring quality (a human check where it matters).

The tool then follows almost naturally. And yes: that sometimes includes the insight that not everything has to be automated. That, too, is mature use.
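
To make those three steps tangible, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the document names, roles and review rule are made up, and a real system would also match the question against document content rather than only filtering on access rights.

```python
# Minimal sketch of the three steps: unlock knowledge, settle rights, assure quality.
# All document names, roles and rules below are illustrative assumptions.

KNOWLEDGE_BASE = [
    {"doc": "returns-policy.pdf", "audience": {"support", "sales"}},
    {"doc": "supplier-contract.pdf", "audience": {"legal"}},
]

def retrieve(role: str) -> list:
    """Steps 1 and 2: unlock knowledge, but only what this role may see."""
    return [d["doc"] for d in KNOWLEDGE_BASE if role in d["audience"]]

def answer(question: str, role: str) -> dict:
    sources = retrieve(role)
    draft = f"Draft answer to {question!r}, based on: {', '.join(sources) or 'no sources'}"
    # Step 3: quality assurance - if there are no approved sources, a human takes over.
    return {"draft": draft, "needs_human_review": not sources}

print(answer("Can a customer return an item after 30 days?", role="support"))
```

The point is not the code but the order: access rights and human review sit in the pipeline from day one, not bolted on afterwards.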

For those who want numbers: most enterprise information is unstructured (estimates range from 80 to 90%) and often remains untapped (40 to 90% is "dark data"), precisely the area where modern AI can unlock value (MIT Sloan, Wharton).


Fast and responsible: the art of controlled freedom

"Can it be faster?" and "Can it be safe?" are not opposites. You can do both - as long as you frameworks creates. Consider three simple building blocks:

  1. A sandbox for experimentation. A place where teams can test with real (classified) data, with logging, budget limits and clear rules. No sprawl, but pace.
  2. A small catalog of permitted AI services. Better five good options than fifty loose ones. For each option: what it is for, what it is not for, and which data classes are allowed (see the sketch after this list). That prevents discussions afterwards.
  3. A production path that everyone understands. No thirty gates, but a clear path: integration check, security/privacy check (DPIA where necessary), monitoring, and above all: when do we stop? Stopping is mature; it keeps energy for what does work. (See, among others, HBR and Gartner.)
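
As an illustration of such a catalog, here is a minimal sketch, assuming a simple data classification; the service names, purposes and allowed data classes are hypothetical and would follow your own policy.

```python
from dataclasses import dataclass

@dataclass
class ApprovedService:
    name: str            # hypothetical service identifier
    purpose: str         # what it is for
    not_for: str         # what it is explicitly not for
    allowed_data: set    # permitted data classes, e.g. {"public", "internal"}

# A deliberately small catalog: better five good options than fifty loose ones.
CATALOG = {
    "doc-assistant": ApprovedService(
        name="doc-assistant",
        purpose="drafting and summarizing internal documents",
        not_for="customer-facing decisions",
        allowed_data={"public", "internal"},
    ),
    "support-search": ApprovedService(
        name="support-search",
        purpose="answering questions from approved manuals and policies",
        not_for="processing personal data",
        allowed_data={"public", "internal", "confidential"},
    ),
}

def may_use(service: str, data_class: str) -> bool:
    """Answer 'which data classes are allowed?' up front, not afterwards."""
    entry = CATALOG.get(service)
    return entry is not None and data_class in entry.allowed_data

print(may_use("doc-assistant", "internal"))  # True
print(may_use("doc-assistant", "personal"))  # False: not permitted for this service
```

The design choice is the point: approval becomes a lookup, so "no" comes with a reason (wrong data class, or not in the catalog at all) instead of a discussion afterwards.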

No trust without numbers

AI costs are often consumption-dependent. That feels unpredictable.
Agree on usage limits, set a clear budget, and calculate in advance what happens in a best-case, worst-case and normal scenario. Show per month: what did it deliver, what did it cost, what will we adjust? Transparency builds trust, as does human-in-the-loop review in decisions that impact customers or colleagues. (McKinsey notes that policies and mitigation are often lacking; see State of AI 2023.)
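
As a back-of-the-envelope illustration of those scenarios, a minimal sketch; every number in it (unit price, volumes, budget) is a made-up assumption, and only the arithmetic carries over.

```python
# Consumption-based pricing: cost = usage volume x unit price.
# All numbers below are illustrative assumptions, not real prices.
PRICE_PER_1K_TOKENS = 0.002       # assumed unit price, in euros
AVG_TOKENS_PER_REQUEST = 1_500    # assumed average request size
REQUESTS_PER_MONTH = {"best": 20_000, "normal": 50_000, "worst": 120_000}
MONTHLY_BUDGET_EUR = 250.0        # the agreed limit

for scenario, requests in REQUESTS_PER_MONTH.items():
    tokens = requests * AVG_TOKENS_PER_REQUEST
    cost = tokens / 1_000 * PRICE_PER_1K_TOKENS
    status = "within budget" if cost <= MONTHLY_BUDGET_EUR else "OVER BUDGET"
    print(f"{scenario:>6}: ~EUR {cost:,.2f} per month ({status})")
```

Three lines of output, and the worst case is suddenly a concrete number you can discuss, not a vague fear.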

A small note of perspective: it will never become perfectly predictable. But unpredictable does not mean ungovernable. With boundaries, measuring points and sober reporting, you'll get a long way.


Business wants pace, IT wants control - and everyone has a point

The classic contradiction. The business sees opportunities and wants momentum. IT and security see risks and want control. And both are right. Solution: AI as a team sport. A small governance team (business, IT, data, legal) that prioritizes, sets frameworks and removes blockages. Not police, but direction.
McKinsey points out a common misconception: employees often move faster with AI than leaders think, and that mismatch is itself a risk. (McKinsey 2025; summary at Innovation Leader.)

Practical tip: set up that sandbox and catalog together. Then no one feels left out, and "no" more often becomes "not yet, until we get X settled."


Dutch reality: privacy and sovereignty as preconditions

We want speed, but not at the expense of our values. In the Netherlands, privacy and autonomy are not a footnote. The EU AI Act is coming; the AP is watching critically. And we don't want to become dependent on one supplier or jurisdiction. TNO has been warning about these dependencies for some time and advocates conscious choices: EU hosting, encryption, bring-your-own-key, sometimes even a sovereign alternative (TNO - news; TNO - 2024 report).

Important: the choice of an AI model always depends on the context. There is no one model that fits everywhere. Sometimes you choose an open-source language model because of control, sometimes a commercial variant because of quality and maintenance, and sometimes a hybrid form. Freedom is nice, but responsibility carries more weight.


Just getting started, but thoughtfully

You don't have to start a big program to get started with AI. It works better to start small, create an overview and build step by step.

  1. Map what is already happening (2-4 weeks).
    What AI applications are people already using, even outside of IT's view? And what data flows through them? Put this together in an overview. The goal is not to control, but to know what's going on.
  2. Choose a few concrete use cases.
    Where does time get lost or mistakes occur? Select two or three situations and agree together what an acceptable outcome would be. This way you make expectations clear and testable.
  3. Set up a secure testing environment.
    A defined environment where you can experiment with real data. Set limits on use and cost and explain in plain language: what is allowed, what can be done, and why. This allows teams to experiment without risks getting out of hand.
  4. Determine how you move from test to daily use.
    Establish the conditions for doing so: security, privacy (e.g., a DPIA), links to existing systems and, above all, clear criteria for continuing or stopping. Stopping is not failure; it prevents energy being wasted on something that does not work.
  5. Measure and share the results.
    Regularly show what an initiative delivers, what it costs, what risks emerge and what improvements you are making; a minimal sketch of such a report follows after this list. Transparency helps build trust and makes value concrete.
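
A minimal sketch of such a monthly report, assuming you track just four things per initiative; the fields and figures are illustrative, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class MonthlyAIReport:
    """One sober page per initiative: delivered, cost, risks, adjustments."""
    initiative: str
    hours_saved: float      # what did it deliver?
    cost_eur: float         # what did it cost?
    incidents: int          # risks that surfaced, e.g. answers corrected in review
    adjustments: list       # what will we change next month?

    def summary(self) -> str:
        return "\n".join([
            f"Initiative:  {self.initiative}",
            f"Delivered:   ~{self.hours_saved:.0f} hours saved",
            f"Cost:        EUR {self.cost_eur:,.2f}",
            f"Incidents:   {self.incidents} flagged in human review",
            "Adjustments: " + "; ".join(self.adjustments),
        ])

# Illustrative numbers only.
report = MonthlyAIReport(
    initiative="support-knowledge-search",
    hours_saved=120,
    cost_eur=148.50,
    incidents=3,
    adjustments=["tighten source filtering", "add FAQ for new product line"],
)
print(report.summary())
```

Whether this lives in Python, Excel or a slide is irrelevant; what matters is that the same four questions come back every month.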

Sound simple? That's exactly the point. Making AI your own does not require complicated pathways, but professional craftsmanship: understanding your work, getting your data in order, making clear agreements and improving step by step.


We often overestimate what AI can do for us tomorrow, and underestimate what we can already manage today: overview, clear choices and small, real steps. If you do that, AI does not happen "to" people, but works "for" people. That's making AI your own.

Resources