To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Eva Maydell is a Bulgarian politician and a member of European Parliament. First elected to Parliament in 2014 at age 28, she was the youngest member serving at the time. In 2019, Maydell was re-elected to Parliament, where she continues to serve on the Committee on Economic and Monetary Affairs and on the Committee on Industry, Research and Energy (ITRE).
Maydell was the ITRE rapporteur for the EU AI Act, the proposed legal framework governing the sale and use of AI in the European Union. In that role, she was in charge of drafting a report on the European Commission's proposal reflecting the opinion of ITRE members, and, in consultation with outside experts and stakeholders, she was also responsible for drafting compromise amendments.
Eva Maydell, member of European Parliament
Briefly, how did you get your start in AI? What attracted you to the field?
When I first became a member of the European Parliament, I was one of the few young female members of European Parliament (MEPs) who worked on tech issues. I’ve always been passionate about how Europe can better leverage the huge opportunities of tech innovation. The great thing about working on tech is that you’re always looking to the future. Having worked on cybersecurity, semiconductors and the digital agenda throughout my time in the Parliament, I knew I would find working on the AI Act incredibly interesting and be able to utilise my experience in those areas on this world-first piece of regulation.
What work are you most proud of (in the AI field)?
I’m proud of the work we’ve done on the AI Act. We have laid out a common European vision for the future of this technology, one in which AI is more democratic, safe and innovative. Regulators and parliaments naturally think about how to protect against and prepare for worst-case scenarios and risks, but I also pushed hard for competitiveness to be at the heart of this conversation. This included championing a research and open source exemption and an ambitious approach to regulatory sandboxes, and aligning our work with our international partners as much as possible to reduce market frictions.
How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?
We’re slowly but surely seeing more women in tech and AI. I have female colleagues and friends who work in tech and who are incredibly talented and really driving the tech agenda. It’s great that we have that network to support each other. I have also found that I have been embraced by the AI community, and that is part of what makes working on this issue so interesting and enjoyable.
What advice would you give to women seeking to enter the AI field?
Just go for it! Be yourself, and don’t think you have to stick to the mould or be like other people. Everyone has something unique to offer. The more women keep sharing their ideas, visions and voices, the more they will inspire other women to step into the world of tech. Whenever I speak with student groups, or young MEPs, it’s wonderful to see so many women interested in entering this field — you can feel the change taking place.
What are some of the most pressing issues facing AI as it evolves?
The greatest challenge for any politician working on tech and AI is trying to regulate and prepare for the future with accuracy. Despite all the facts, figures and research, there’s a certain element of looking into a “crystal ball.” The big issues politicians will need to address are:
Firstly, how can this technology make our economies more competitive while ensuring wider social benefit?
Secondly, how do we stop AI fuelling disinformation?
And thirdly, how do we set international rules to ensure AI is developed and utilized according to democratic standards?
What are some issues AI users should be aware of?
The very serious challenge posed by AI as a vehicle to accelerate the spread of disinformation and deepfakes. This is particularly important this year, given that around 50% of the world’s population will go to the polls to vote. We all need to cast a critical eye over the images, videos and news articles we see. As the technology improves, we need to become more vigilant against being manipulated. This is an issue I’m working on extensively right now.
What is the best way to responsibly build AI?
If we want a future in which AI improves our lives and helps solve our most pressing challenges, then there’s one key ingredient: trust. We need trust in these technologies.
We can’t afford to rest on our laurels. The AI Act doesn’t mean we’re “one and done.” We need to keep asking ourselves what’s next — and that doesn’t necessarily mean more regulation. But it does mean keeping a constant eye on the big picture — how AI and its regulation are affecting our economy, security and lives.
How can investors better push for responsible AI?
Investing in AI or any innovative technology is no different to investing in any other product. Businesses, banks and corporations are aware that there are significant financial merits to being a positive force in the world around us. Ultimately, scaling AI in a responsible way is more likely to sustain success, reduce financial risks and failures, and therefore create consumer and market confidence.