Development Research: AI Demystified

In these posts, we will address the actual mechanics, and the valid concerns, underlying the automation of alumni research, and put some key misconceptions to rest.


When people hear that AI is being used in a system, one question tends to come up very quickly: “Is our data being used to train the model?”

For Development teams working with sensitive alumni information, the answer needs to be clear: No.

At AlmaReach, the rule is simple: client data is never used for model training.

This applies in two directions. First, third-party LLM providers are never permitted to train on client data processed through our systems. The information used in research tasks is handled strictly for that task alone. Second, we do not use client data to train our own internal models. Even when building bespoke tools and automation, training datasets are kept entirely separate from any client information.

Why does this matter?

We want to ensure that our clients’ data remains their data. Personal data must never be absorbed into external AI systems; this prevents downstream exposure, safeguarding information from appearing in unrelated outputs or future model behaviour. We maintain clear boundaries around what data can and cannot be used as training material.

AI tools should process research tasks without turning institutional data into fuel for future models.


Modern AI systems are increasingly being given direct access to the internet. In theory, this allows models to search the web, pull information from external sources, and incorporate it into their responses in real time. Whilst powerful, this approach also raises an obvious issue:

Who is actually in control of what the AI is doing?

At AlmaReach, we take a different approach. Our LLMs never have independent web access. They cannot freely browse, search, or interact with external sources on their own. Instead, all web and directory searches are handled through controlled tools that we manage directly. When the system needs information, it requests the use of a specific tool, and that request is handled within our own controlled environment.

This gives us two important safeguards.

First, we control exactly where information is sourced. Searches and data retrieval happen through predefined pipelines rather than open-ended browsing.

Second, every tool interaction is recorded. Each time the system performs a search or accesses a dataset, that action is logged, providing a clear record of how information was gathered. These sources are cross-referenced for identity resolution and are provided to our clients to ensure transparency.
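To make this concrete, here is a minimal sketch of what a tool-gated request flow can look like. The tool names, handler, and log structure below are illustrative assumptions for the example, not our production code.

```python
from datetime import datetime, timezone

# Hypothetical handler standing in for a real directory-lookup pipeline.
def directory_search(query: str) -> dict:
    return {"sources": ["example-directory"], "records": []}

# The model may only request tools registered here; there is no
# open-ended browsing path.
APPROVED_TOOLS = {"directory_search": directory_search}

audit_log = []  # in practice: persistent, append-only storage

def handle_tool_request(tool_name: str, query: str) -> dict:
    """Run a model-requested tool inside the controlled environment,
    logging the action so every retrieval is traceable."""
    if tool_name not in APPROVED_TOOLS:
        raise PermissionError(f"'{tool_name}' is not an approved tool")
    # The interaction is recorded before the tool runs.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "query": query,
    })
    return APPROVED_TOOLS[tool_name](query)
```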

For Development research, oversight matters. Automation can accelerate work dramatically, but the underlying research process should remain transparent and controlled.


One of the most common concerns we hear about AI is simple: “If we use it, where does our data actually go?”

It’s a fair question, especially for Development teams working with sensitive alumni information.

At AlmaReach, our approach is carefully designed around a simple principle: no persistent chat history.

Each AI task in our system operates with fresh context every time. That means:
• No conversations are stored
• No ongoing chat histories are retained
• No internal data accumulates over time

Instead, when the system performs a task, it is given only the specific information required for that single operation. Once the task is complete, the context disappears.
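As a rough sketch of the pattern, assuming a hypothetical single-shot model call:

```python
def call_llm(context: dict) -> str:
    # Stand-in for a stateless, single-shot model call.
    return f"processed {len(context['inputs'])} input field(s)"

def run_research_task(instructions: str, task_inputs: dict) -> str:
    """Run one task with a fresh, single-use context."""
    context = {"instructions": instructions, "inputs": task_inputs}
    result = call_llm(context)
    # Nothing is retained: no chat-history object, no accumulating state.
    # The context is a local variable that vanishes when this returns.
    return result
```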

Why does this matter?

First, it improves accuracy: each task is tightly scoped with clear instructions, reducing drift or confusion from long histories. Second, it gives us stronger privacy controls: without stored conversations, there is no growing archive of sensitive prompts or outputs. Third, it ensures transparency for our clients: every AI interaction is discrete, auditable, and purpose-limited.

For Development teams, this approach reflects a broader principle we believe in: AI should support and accelerate research, not replace the judgement and expertise behind it.


When organisations begin integrating AI into their workflows, one of the first questions is simple: which model should we actually use? In practice, the decision comes down to three core considerations:

Trustworthiness

Before anything else, an AI provider must meet strict standards around privacy and data handling. If a provider cannot clearly explain how prompts are handled, whether any data is stored, or whether information may be used for training, then it simply isn’t suitable for working with sensitive alumni information. Any AI integrated into the Development research process must operate within clear and accountable boundaries.

Effectiveness

AI models also need to perform reliably on the tasks they are given. That means accurately interpreting instructions, extracting relevant information, and producing consistent outputs at scale. Different models perform better on different types of tasks, which is why careful testing against real research workflows is essential.

Control

Organisations need to retain control over how AI is used within their systems. This includes deciding when a model is used and what information and tools it can access. AI should operate as a component within a structured workflow, not as an autonomous decision-maker.

At AlmaReach, we continuously evaluate every model provider against these three criteria. If a current provider ceases to meet our standards, we replace it with a more suitable alternative. Please read the AI Policy on our website to see which models we are currently using in our workflow.


Much of the public conversation around AI focuses on ‘generative AI’, i.e. models that are designed to create new content (text, images, videos, etc.) based on user prompts.

For some use cases this is extremely powerful, but in Development research, generation is not the goal… accuracy is.

At AlmaReach, our AI systems are deliberately designed to be discriminative rather than generative.

What does that mean in practice?

Our models do not invent information, speculate about missing details, or generate new narratives about individuals. Instead, they operate on a much narrower task: organising and cross-referencing information retrieved from public sources. In other words, the AI helps process research inputs rather than creating new claims.

This distinction is important for several reasons:

1. It reduces the risk of hallucination. Generative systems are prone to producing plausible but incorrect statements. In Development, these may adversely influence relationship management or fundraising strategy: an unacceptable risk.

2. It preserves the role of human oversight. By limiting the AI’s remit to structuring data and resolving identities, rather than generating novel claims, every output claim can be cited to one or more real sources, as sketched below. These can then easily be verified by a human, if desired.
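A minimal sketch of such a source-backed claim structure (the field names are invented for the example):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One structured statement extracted from public sources.
    A claim without at least one source is never surfaced, so every
    output can be traced back and checked by a human reviewer."""
    subject: str                                      # e.g. "current employer"
    value: str                                        # e.g. "Example Corp"
    sources: list[str] = field(default_factory=list)  # backing URLs

    def is_verifiable(self) -> bool:
        return len(self.sources) > 0
```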

This restriction may sound subtle compared with more flexible ‘deep research’ models, but it is fundamental to maximising transparency, accuracy and accountability within our system.


One of the most important challenges in Development research is a simple one: are we looking at the right person?

Names alone are rarely enough to confirm identity. Alumni may share common names, change roles, move countries, or appear differently across public records and online sources.

At AlmaReach, we approach identity resolution by cross-referencing signals and claims across multiple sources. When information is gathered, our system evaluates how different data points align. Professional roles, education history, geographic location, organisational affiliations, and other contextual indicators are compared across sources to determine whether they refer to the same individual.

Sometimes we require strict matching, confirming clear identifiers such as exact name matches or known employers and locations. Other times we allow for ‘fuzzy’ matching, recognising variations in job titles and name formatting. By combining these approaches, our system can either reinforce a match or flag inconsistencies suggesting the data belongs to someone else. This is where AI shines: ingesting large quantities of information and identifying both coherence and inconsistency.

The goal is not to make definitive claims automatically. Instead, we build a holistic confidence assessment based on how strongly the available signals align across independent sources.

This allows researchers to review the evidence quickly, understand why a match has been suggested, and make an informed decision about whether the identity is correct.
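For illustration only, a blended strict-plus-fuzzy confidence score might look like the sketch below; the fields and weights are invented for the example, not our production values.

```python
from difflib import SequenceMatcher

def fuzzy(a: str, b: str) -> float:
    """Loose similarity in [0, 1] for fields like names and job titles."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_confidence(candidate: dict, record: dict) -> float:
    """Blend strict and fuzzy signals into a single confidence score."""
    score = 0.0
    # Strict signals: exact agreement on strong identifiers.
    if candidate.get("employer") == record.get("employer"):
        score += 0.4
    if candidate.get("location") == record.get("location"):
        score += 0.2
    # Fuzzy signals: tolerate variations in formatting.
    score += 0.25 * fuzzy(candidate.get("name", ""), record.get("name", ""))
    score += 0.15 * fuzzy(candidate.get("title", ""), record.get("title", ""))
    return round(score, 2)
```

A strong name match backed by the same employer and location scores highly; a common name alone does not.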


In Development research, we often hear teams express hesitation when it comes to adopting AI. But here’s the thing: many are already using it, sometimes without realising it, and without unlocking its full potential.

Some teams have already integrated AI into their workflows. The risk? Complacency. Teams may feel they’ve “tried” AI and see no need for further adoption. However, given the incredible speed at which the technology is improving, it is easy to fall behind and to leave much of its capability untapped.

AI is not “One-Size-Fits-All”

With the proliferation of diverse AI applications, especially over the last year, some products are naturally more relevant to Development than others. Early adopters in alumni settings may be using general-purpose AI technology that is now far outperformed by specially tailored solutions.

Human Expertise Still Matters

AI doesn’t replace human insight; it complements it. By adopting AI in the right way, teams can reduce manual workloads and enhance data accuracy, freeing up time for deeper, more strategic thinking. The goal is to use AI to accelerate the research process, not replace it. Complacency driven by a fear of technological ‘replacement’ can thus hold teams back from thinking more broadly.


AI can be a powerful tool, but without human oversight, it’s easy for things to go wrong. This is especially true in Development research, where accuracy and context are critical. That’s why at AlmaReach, we believe in keeping a “human in the loop” at all times.

Our AI systems don’t make autonomous final decisions. While AI helps gather data, cross-reference sources, and organise information, all final judgments are made by human researchers. This ensures that each step is monitored, and every claim is backed by a verifiable source.
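In code terms, this is an approval gate rather than a write-through. The sketch below assumes a hypothetical reviewer callback:

```python
def propose_and_review(suggestion: dict, reviewer) -> dict | None:
    """AI output is a proposal, never a final record. A human must
    approve it (after checking the cited sources, if desired) before
    it is committed anywhere."""
    decision = reviewer(suggestion)  # e.g. returns "approve" or "reject"
    if decision == "approve":
        return suggestion  # only now does it enter the record
    return None  # rejected proposals are never written through
```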

Why is this important?

First, it reduces the risk of errors or “hallucinations” in the data. AI might suggest a certain approach, but the final call is made by an expert who knows the context.

Second, it preserves transparency and accountability. Every action, search, and data retrieval is logged and traceable, so teams can trust the outputs, verify every step of the process, and revisit it in future.

By keeping humans in the loop, AI tools can enhance research without replacing the critical thinking and expertise that drives it.


AI systems hold the promise of transforming how we conduct research, but their effectiveness depends on how fairly and accurately they process data. One of the biggest challenges with AI is bias. In Development research, where trust and integrity are paramount, bias can have serious consequences.

At AlmaReach, we take proactive steps to minimise bias in our AI systems. Here’s how we approach fairness:

1. Data Diversity: We ensure that the datasets our bespoke models work with are diverse and representative. AI can only be as fair as the data it's trained on, so we focus on using comprehensive, balanced sources that reflect the wide range of individuals and contexts we work with in Development research.

2. Bias Monitoring: We actively monitor our systems for potential biases in decision-making. Our AI processes, from identity resolution to alumni analytics, are evaluated statistically (a simple illustration follows this list) to ensure they don’t unfairly favour or disadvantage any group or individual.

3. Transparency and Accountability: Our AI models are designed to be transparent in their decision-making processes. Each data point comes with a clear record of its sources, ensuring that any potential bias can be traced and addressed by researchers.
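As a simple illustration of the kind of statistical check involved (the grouping, record format, and tolerance are assumptions for the example):

```python
from collections import defaultdict

def match_rate_by_group(results: list[dict]) -> dict:
    """Compute identity-match rates per audited group.
    Each result looks like {"group": "cohort-A", "matched": True}."""
    totals = defaultdict(lambda: [0, 0])  # group -> [matches, total]
    for r in results:
        totals[r["group"]][0] += int(r["matched"])
        totals[r["group"]][1] += 1
    return {g: m / n for g, (m, n) in totals.items()}

def flag_disparity(rates: dict, tolerance: float = 0.1) -> bool:
    """Flag for human review if group match rates diverge too far."""
    return max(rates.values()) - min(rates.values()) > tolerance
```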

AI’s role in Development research should enhance fairness, not hinder it. By actively managing bias and maintaining oversight, we ensure that our AI systems help rather than harm the research process.


As in most fields, AI’s capabilities for Development are evolving incredibly fast. We are still only scratching the surface of what will be possible.

Our piece of the puzzle

Our focus is on AI-powered data enrichment and analytics for alumni research. Reams of public data are just waiting to be tapped. Greater understanding of alumni bases, allowing for more precise targeting of communications, events, and fundraising strategy, benefits institutions and alumni alike.

Where next?

But it doesn’t stop there. Predictive analytics powered by AI will increasingly forecast specific alumni behaviour, allowing teams to proactively engage with the right alumni at the right moments. This will be possible on an industrial scale.

With AI’s continued advancement, its role in alumni relations will only expand. Now is the time to harness that power, rather than to fall behind.

What remains

Rather than displacing the human touch in Development, AI will allow it to emerge front-and-centre once again. The provision of more sophisticated data insights, as well as the automation of routine tasks, will free up time for the meaningful, personal interactions that lie at the core of our work. That is not replicable by AI.

Rather than fearing the imposition of this technology, we should approach it optimistically. It’s an exciting time for the industry: an opportunity, unparalleled in recent decades, to amplify our efforts and to connect more deeply with alumni, freed from the uncertainties that have hindered Development for too long.