Featured CyberDefendHER: Alexandra, Program Manager Responsible AI at ZEISS

Who are you and what is your work focus? 

I’m Alexandra Wander — and apparently, I’ve become known as “the AI governance lady.” I’ll take it.

My work sits at the intersection of two things that don’t always get to be in the same room: deep technical understanding of AI systems and the organizational infrastructure needed to govern them responsibly. At ZEISS, I lead the company-wide Responsible AI program — building the frameworks, policies, and cross-functional structures that allow a global precision technology company to deploy AI at speed without losing sight of what matters: compliance, privacy, security, ethics, and, ultimately, trust. That includes aligning with the EU AI Act and making sure AI risk isn’t just a legal checkbox but an engineering discipline.

How did you get into Cybersecurity? / What excites you most about working in Cybersecurity?

I’m just transitioning into cybersecurity; I come originally from the space industry. As a systems engineer in aerospace, you learn that risk management is the foundational discipline of the job. You analyze failure modes, propagation paths, and mitigations systematically, across every subsystem and the system as a whole, until the remaining risk is understood and acceptable. You can never just look at the parts in isolation. A satellite usually doesn’t fail because one component fails. It fails because of how components interact under conditions nobody fully anticipated. You have to hold the whole system in view, always.

When I moved into AI security, I recognized that mental model. The attack surface usually isn’t a single vulnerability; it’s a system. Today’s threats aren’t static anymore: with agentic AI in place, they evolve rapidly. And the residual risk you accept today has to be a conscious, documented decision.

That’s what excites me most about this field: it demands exactly the kind of structured, whole-system thinking that space engineering trained me in. And it’s a rapidly evolving, innovative, highly complex technical field with an amazing team to work with at ZEISS.

What inspired you to join CyberDefendHERs?

Honestly, the combination of female empowerment and cybersecurity is one I find genuinely important, both professionally and personally. 

I’ve spent more than 15 years working in high-tech environments where I was frequently the only woman in the room: in aerospace, in AI research, in autonomous driving, and now in AI governance. I know what it costs to constantly navigate spaces that weren’t designed with you in mind. And I know how much difference it makes when someone creates a room that was.

CyberDefendHERs does exactly that: it brings together two things I care about deeply, building a more secure digital world and making sure the people doing that work reflect more of the world & society we’re trying to protect. In my personal opinion, those aren’t separate agendas. Diverse perspectives in AI & data security aren’t a nice-to-have; they’re a structural advantage. Blind spots get covered. Governance gets more robust.

What impact would you like to make through your work?

I want to make two kinds of impact through my work:

The first is about technology. Earlier this year, reports surfaced that recordings from Meta’s Ray-Ban smart glasses were being sent to workers for AI improvements, without people’s knowledge or consent. That is a privacy violation, and it was met with little more than a collective shrug: “It’s just Meta being Meta.” That normalization, the slow erosion of the expectation that technology should be safe, respectful, and trustworthy, is exactly what responsible AI, in my personal opinion, exists to prevent.

That’s also why I chose ZEISS. It’s a company where the trust between employer, employee, and customer is something people actually believe in and fight to protect. ZEISS makes tools that go into operating rooms, into research labs, into people’s eyes. The expectation that their products are safe and precise is non-negotiable. I want to make sure that as AI becomes part of that ZEISS landscape, the same standard applies. Trust designed in from the start.

The second kind of impact is more personal. Recently, I hesitated to apply for a role I was clearly qualified for on paper: of the 23 requirements listed, I covered 21 with my experience. However, the job description listed a specific cybersecurity certification I do not hold. Despite colleagues, men and women both, actively encouraging me to apply, I struggled. Because of the gap in my CV, and because of the gap in my confidence about whether I was really the best person for the job. I have 15+ years building trustworthy intelligent systems. I have designed AI governance frameworks for a global technology company. I have brought security, legal, ethics, and engineering teams to the same table and held them there. And I nearly talked myself out of applying over a single missing credential.

That pattern, technically brilliant, externally validated, internally hesitant, is not mine alone. I see it in the exceptional women I coach, too. I lived it for years before I learned to recognize it for what it is: not an accurate assessment of capability, but a trained response to environments that were never quite built for us.

The impact I want to make is a world where AI systems are held to the same standard as a ZEISS lens: precise, trustworthy, safe by design. And a world where the women building those systems stop shrinking and start claiming their spaces in any room.

Why do you think it’s important to share expertise in cybersecurity?

When I started my career, I worked the way I thought good research was supposed to work: alone in my office, deep in the problem, reading everything, thinking hard. I was trying to teach satellites to think for themselves, and I treated it as a solitary discipline.

Until I went to my first conference. The conversations in the breaks, the working sessions with engineers from adjacent domains, the moments where someone asked a question I hadn’t thought to ask or offered a solution from automotive — that’s where the real breakthroughs happened. Not alone in my office. In the friction between different kinds of expertise.

Cybersecurity is no different, and the stakes are higher: a single agent’s risk profile simultaneously touches training data integrity, regulatory exposure, security architecture, and organizational accountability. No single domain owns that. Security, AI engineering, legal, and ethics have to be structurally in the same room.

What concerns me most is that the AI threat landscape is evolving faster than organizations’ governance & security frameworks can adapt. The gap between what AI teams build and what security teams can evaluate is real and widening. Collective defense is the only answer that seems to scale. A vulnerability found in one organization is a warning signal for every organization running comparable systems.