I design, lead, and build AI solutions—spanning both research and production—from hands-on coding to team leadership and academic publishing. My current focus is on Responsible AI and Generative AI.
Over 7 years in academia—culminating in a tenure-track position at the Dutch National Center for Math & CS—shaped my ability to think critically and creatively about complex problems. The past years in senior industry roles have expanded that foundation, from building production-grade ML systems to leading teams and guiding strategy. Today, I thrive at the intersection of research and application, bridging cutting-edge innovation with real-world implementation. From time to time, I still provide academic service.
InSilicoTrials Technologies (NL)
INGKA Digital [IKEA] (NL)
Dutch National Center for Math & Computer Science [CWI] (NL)
TU Chalmers (SE)
Dutch National Center for Math & CS + TU Delft (NL)
Dutch National Center for Math & CS + TU Delft (NL)
Led the development of, and contributed hands-on to, an LLM-based multi-agent system that handles different types of queries by autonomously operating tools. The architecture featured an in-house Model Context Protocol (MCP) server exposing several specialized tools. These included retrieval-augmented generation (RAG) over sources both internal and external to the company, as well as functionalities of the simulation platform in which the multi-agent system was deployed, offering seamless integration.
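To give a flavor of the tool-serving idea, here is a minimal sketch of an agent-facing tool registry: named tools with descriptions the LLM can use to choose which one to call. All names and tools below are hypothetical stand-ins; the real system serves its tools through the in-house MCP server.

```python
# Minimal sketch of agent tool dispatch (hypothetical names, not the real system).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str           # shown to the LLM so it can pick the right tool
    run: Callable[[str], str]  # the tool's implementation

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def dispatch(self, name: str, query: str) -> str:
        # Route a tool call emitted by the agent to the matching tool.
        if name not in self._tools:
            return f"unknown tool: {name}"
        return self._tools[name].run(query)

# Hypothetical specialized tools, standing in for RAG and platform functions.
registry = ToolRegistry()
registry.register(Tool("internal_rag", "Search internal documents",
                       lambda q: f"[internal docs hits for: {q}]"))
registry.register(Tool("run_simulation", "Launch a platform simulation",
                       lambda q: f"[simulation started with config: {q}]"))

print(registry.dispatch("internal_rag", "trial protocol"))
```

In the deployed system this registry role is played by the MCP server, which also handles tool discovery and transport.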
Contributed first-hand to explainable AI with: (1) An award-winning algorithm, and associated publications, to discover mathematical equations from data (symbolic regression). Check out: SRBench at NeurIPS | SIGEVO's Best PhD Dissertation Award 2021 | Silver Award at the 2021 Humies Competition | Open-source repo. (2) A simple but effective algorithm to explain black-box ML models through counterfactual explanations: "how should the input change to obtain a different output?". Check out: Publication in Artificial Intelligence (Elsevier, open access) | Open-source repo | Colab example.
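To illustrate the counterfactual idea (this is a toy example, not the published algorithm), one can search for the smallest perturbation of an input that flips a black-box model's prediction. The model, step size, and search radius below are invented for the example.

```python
# Toy counterfactual search: find a nearby input that the black box
# classifies differently. Model and parameters are made up for illustration.
from itertools import product

def black_box(x):
    # Stand-in black-box classifier over two features.
    return int(2 * x[0] + x[1] > 5)

def counterfactual(x, model, step=0.25, max_rings=40):
    """Search rings of growing L-infinity radius until the label flips,
    then return the flipping candidate closest in L2 distance."""
    base = model(x)
    for r in range(1, max_rings + 1):
        flips = []
        for dx, dy in product(range(-r, r + 1), repeat=2):
            if max(abs(dx), abs(dy)) != r:   # only points on the ring's border
                continue
            cand = (x[0] + dx * step, x[1] + dy * step)
            if model(cand) != base:
                flips.append(cand)
        if flips:
            return min(flips, key=lambda c: (c[0] - x[0])**2 + (c[1] - x[1])**2)
    return None

x = (1.0, 1.0)                      # classified as 0 (2*1 + 1 = 3 <= 5)
cf = counterfactual(x, black_box)   # a nearby point classified as 1
print(x, "->", cf)
```

The published method is more sophisticated (it respects feature plausibility and scales beyond brute-force search), but the question it answers is the same: what minimal change to the input would change the outcome?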
Worked on putting responsible AI into practice and contributed to the surrounding discussion, for example by commenting, on behalf of the company, on the FDA's proposed guidance on the use of AI for drugs. Further, led the development of a technical, actionable protocol for quantifying the fidelity, utility, and privacy of generative-AI-based synthetic data in healthcare. Measuring privacy risks in particular required mapping regulatory guidelines from the EU's GDPR and the European Medicines Agency into appropriate metrics, as well as simulated cyber-attacks drawn from the academic literature on synthetic data and anonymization.
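One simple privacy check that appears in this literature, shown here as a sketch with invented data, is distance to closest record (DCR): if synthetic rows lie unusually close to real training rows, they may leak information about real patients. The actual protocol combines several such metrics with simulated attacks.

```python
# Sketch of the distance-to-closest-record (DCR) privacy metric.
# Records are plain numeric tuples; data here is invented for illustration.
import math

def dcr(synthetic, real):
    """For each synthetic record, the Euclidean distance to its
    nearest real record. Near-zero values signal potential leakage."""
    return [min(math.dist(s, r) for r in real) for s in synthetic]

real = [(0.1, 0.2), (0.5, 0.9), (0.8, 0.3)]
synthetic = [(0.1, 0.2),    # exact copy of a real record: distance 0 (red flag)
             (0.45, 0.7)]   # plausibly novel record

distances = dcr(synthetic, real)
print(distances)
```

In practice one compares the DCR distribution of the synthetic data against a held-out real baseline rather than judging individual distances in isolation.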
Interested in having a chat or collaborating? Feel free to reach out through any of these channels or fill in the form below.