TechBytes with Alyssa Simpson Rochwerger, VP of Product at Figure Eight

What inspired your journey into technology?

During my first job after college, I worked at a startup in the customer service department. I talked to customers, figured out what was concerning them and tried to stop them from canceling their service. Quickly, I learned that I wanted to solve the customer problems I was hearing about daily. That desire led to me working more closely with engineering teams. Eventually, that path took me into my current career in technology.

As a woman leading a Product team in the tech industry, what unique challenges and opportunities did you find in your journey? Who mentored you in this role?

I face all the common challenges that plague women in their tech careers, many of which are well-researched problems, including but not limited to discrimination and bias, the “mommy tax,” sexual harassment, and “queen bee syndrome.” These are pervasive cultural and societal pressures that keep women out of tech, stunt careers, and let bias creep in to hold back the advancement of underrepresented people. I’m not immune to them and have experienced all of them to varying degrees, sometimes obviously and explicitly, and sometimes in subtler, more nuanced ways.

Along my journey, I’ve made it a point to find mentors and keep in touch with them. I was lucky enough to sit next to Tara Lemméy on an airplane, where she gave me advice I’ve carried with me ever since. I also make it a point to go to lunch or coffee with people in my day-to-day life whom I admire. So many people have graciously given me not only their advice but their time along my journey. These interactions provide perspective I might not have found otherwise, and that difference in point of view matters a great deal.

What message do you have for other women professionals in the Marketing Technology industry?

Even though the change toward a more inclusive workforce is happening slowly, know that even today, you belong. Remind yourself that this industry desperately wants and needs you to be part of it. As you work toward your goals, accept opportunities, know your worth and find mentors and supporters who will lift you up! Surrounding yourself with other perspectives is important no matter what industry you’re in.

Could you tell us about your recent initiatives in working with Data Science, AI and ML Teams?

All of my work serves AI and ML teams, but I’m particularly proud of all the work the team did this past summer. We launched some closed beta products that clients are starting to leverage to generate real business value, and I can’t wait to launch them publicly. It’s also great to see our ML-assisted suite of tools come together, applying ML to data annotation at scale and enabling better efficiency and quality when annotating audio, text, and video, especially when long-time clients enjoy those benefits. We have launched 15 features this year, and about two-thirds of them are ML/AI-driven features that enhance our product offering.
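As a rough illustration of what “ML-assisted” annotation means in practice, the sketch below shows one common pattern: a model pre-labels each item, and only low-confidence items are routed to human annotators. This is a generic, hypothetical example, not Figure Eight’s actual implementation; the confidence threshold and the `toy_predict` stand-in are invented for illustration.

```python
# Sketch of an ML-assisted annotation loop: the model pre-labels items,
# and humans review only the items the model is unsure about.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class AnnotationQueue:
    auto_labeled: List[Tuple[str, str]] = field(default_factory=list)
    needs_human_review: List[str] = field(default_factory=list)

def route_items(items: List[str],
                predict: Callable[[str], Tuple[str, float]],
                confidence_threshold: float = 0.9) -> AnnotationQueue:
    """Pre-label items with a model; escalate low-confidence items."""
    queue = AnnotationQueue()
    for item in items:
        label, confidence = predict(item)
        if confidence >= confidence_threshold:
            queue.auto_labeled.append((item, label))
        else:
            queue.needs_human_review.append(item)
    return queue

# Toy stand-in for a trained model (invented for this sketch).
def toy_predict(text: str) -> Tuple[str, float]:
    return ("positive", 0.95) if len(text) > 40 else ("positive", 0.55)

queue = route_items(["short clip transcript",
                     "a much longer audio transcript " * 3], toy_predict)
print(len(queue.auto_labeled), "auto-labeled;",
      len(queue.needs_human_review), "routed to human annotators")
```

The efficiency gain comes from concentrating human effort on the ambiguous items while the model handles the easy ones.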

What special skills and talent do you seek in humans working on your projects?

Rather than skills and talent, I seek empathy in the humans I recruit to work on our projects. People who are curious about themselves and others, like to solve problems, and exhibit lots of emotional intelligence are real assets to teams. I also seek out individuals who display grit and an aptitude for learning quickly. In my career, I’ve found it hardest to teach attitude: If you don’t have a can-do attitude or aren’t curious about the world and how to solve problems, then it’s less likely that teaching you the “hard” skills is going to provide the desired returns. At the end of the day, skills like how to be a good product manager are very teachable (I can teach you about analytics and methodologies), but I can’t teach empathy.

How does training data, specifically, help to prevent bias in AI? How do you build training data at Figure Eight?

Consider a content moderation tool that tries to identify hate speech. To create the Natural Language Processing (NLP) model for that tool, humans must first label certain terms as hate speech or not. Those labels give the model the data it needs to identify uncivil language, and a model that understands hate speech can help clean up almost any written communication. Essentially, you’re teaching a model to understand something new, in this case, hate speech. Because humans are labeling the data, though, each individual’s biases get passed along to the model. For example, is calling someone “queer” hate speech or not? In my experience, it depends on the context, and humans bring their biases with them when labeling data.
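To make that concrete, here is a minimal sketch of how annotator labels become model behavior, using toy data and scikit-learn (assumed installed). Two hypothetical annotators agree on the clear-cut examples but split on the ambiguous one from the article, and the resulting models inherit that disagreement:

```python
# Minimal sketch: annotator labels directly shape a text classifier.
# Toy data only; 1 = hate speech, 0 = not hate speech.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "I hope you have a great day",
    "get out of our neighborhood, you people",
    "the queer community center opens today",  # the ambiguous example
]

labels_a = [0, 1, 0]  # annotator A reads the third example as benign
labels_b = [0, 1, 1]  # annotator B labels it hate speech

for name, labels in [("annotator A", labels_a), ("annotator B", labels_b)]:
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    prediction = model.predict(["queer voices in film"])[0]
    print(name, "-> model predicts:", prediction)
```

The two models make opposite predictions on the same new sentence, purely because of who labeled the training data.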

There are many things that organizations can do when creating training data to help mitigate bias in AI. For example, they should be transparent and open about what data trained the system, where it was collected, how it was labeled, what the benchmark for accuracy was, and how that’s measured. Additionally, companies should understand that they’ll have different end users who will use the system differently; imagining what their experiences might be and building for them will help. Figure Eight uses a human-in-the-loop data annotation process, whereby humans and machines work together to annotate training data. We offer clients a diverse pool of data annotators and provide guidance so they can best remove unintended bias from their AI initiatives.
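One simple quality-control pattern behind human-in-the-loop annotation (a generic sketch, not Figure Eight’s platform code) is to collect several independent judgments per item, aggregate them by majority vote, and flag low-agreement items, which is where ambiguity and bias tend to surface. The 0.7 agreement threshold here is an illustrative choice:

```python
# Sketch: aggregate multiple annotators' judgments per item and flag
# low-agreement items for expert review or clearer instructions.
from collections import Counter

def aggregate(judgments_per_item, min_agreement=0.7):
    results = []
    for item, judgments in judgments_per_item.items():
        counts = Counter(judgments)
        label, votes = counts.most_common(1)[0]  # majority-vote label
        agreement = votes / len(judgments)
        results.append({
            "item": item,
            "label": label,
            "agreement": round(agreement, 2),
            "needs_review": agreement < min_agreement,
        })
    return results

judgments = {
    "that's so queer": ["hate", "not_hate", "not_hate", "hate", "not_hate"],
    "have a nice day": ["not_hate"] * 5,
}
for row in aggregate(judgments):
    print(row)
```

Items where annotators split get surfaced rather than silently averaged away, which is exactly where clearer labeling guidance or a more diverse annotator pool pays off.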

What ethical challenges do we need to be addressing when considering AI in our business?

Researchers at the University of Washington and the University of Maryland found that image searches for certain jobs revealed serious under-representation and bias in the results. Search “nurse,” for example, and you’d see only women. Search “CEO” and it’s all men. This is just one example, but there are plenty of real-world instances where an algorithm produced discriminatory or otherwise hurtful results. This happens because AI mirrors the biases of the humans who work on the project, from data annotation all the way through the model creation and production processes.

No matter what an AI initiative is meant to do, these types of biases can creep in anywhere during the development process. It’s important that organizations are empathetic, iterate throughout the model building and tuning processes, and take great care with their data in order to remove unwanted bias.
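As one concrete way to “take great care” with data, a team might audit how groups are represented within each category before training, echoing the nurse/CEO example above. This is a toy sketch with invented field names, not a real dataset or a standard tool:

```python
# Toy sketch of a representation audit: count how often each group
# appears per category in the training data before any model is built.
from collections import defaultdict

def representation_by_category(records):
    counts = defaultdict(lambda: defaultdict(int))
    for record in records:
        counts[record["category"]][record["group"]] += 1
    return counts

records = [
    {"category": "nurse", "group": "woman"},
    {"category": "nurse", "group": "woman"},
    {"category": "ceo", "group": "man"},
    {"category": "ceo", "group": "man"},
    {"category": "ceo", "group": "woman"},
]
for category, groups in representation_by_category(records).items():
    total = sum(groups.values())
    shares = {g: round(n / total, 2) for g, n in groups.items()}
    print(category, shares)  # skewed shares flag a dataset to rebalance
```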

How do you prepare for an AI-centric world?

We are already in an AI-centric world. To prepare for this reality, you help people navigate it by teaching them that the world is constantly changing, and how to adapt to that change, embrace it, and see the possibility it represents. I try to be inclusive and ensure that when things are changing, certain groups of people are not disproportionately getting the short end of the stick. It’s important to ask questions like “What will be the impact of this?” or “Who will benefit?” or “Who will this negatively impact?” Adaptability and a glass-half-full approach go a long way toward preparing for a world that is already full of AI.

Which startups in the tech industry are you keenly following? What message do you have for startups looking to provide AI-as-a-service?

Recently, I’ve paid more attention to narrowly focused companies, such as Ceres Imaging and Grabango, that apply AI and ML to a specific use case with large benefits, new business models, or efficiency gains. I think these types of companies will be very successful if they can stay focused and apply the technology efficiently at scale.

How do you inspire your people to work with technology?

I’m not sure it’s a goal of mine to inspire people to work with technology if they aren’t inclined to do so already. I do try to educate folks who may not have exposure to how impactful the technology industry is. I try to show people that it can be a career path accessible to them even if they don’t have a CS degree or aren’t “technologically inclined.” Technology is an industry that desperately needs people who think in alternative ways and have different backgrounds. If technology as a sector is going to grow and serve humanity’s best interests, then it needs to become more representative. I share small examples of technology working well or poorly based on who built it, and I tell people how they can get into the industry if they want to.

Alyssa is a customer-driven product leader dedicated to building products that surprise, delight, and bring new value to market. Her experience in scaling products from conception to large-scale ROI has been proven at startups and large enterprises alike. As Director of Product Management at IBM Watson, Alyssa saw first-hand how thoughtful, sophisticated use of data has the power to transform industries. During her recent tenure at IBM, she oversaw the development of a large portfolio of AI products including vision, speech, emotional intelligence, and machine translation.

Alyssa was born and raised in San Francisco and holds a BA in American Studies from Trinity College. When she is not geeking out on data and technology, she can be found hiking, cooking, and dining at “off the beaten path” restaurants with her labradoodle, Scout.

Figure Eight, an Appen company, combines the best of human and machine intelligence to provide high-quality annotated training data that powers the world’s most innovative machine learning and business solutions. With over a decade of experience and 10+ billion judgments, Figure Eight’s enterprise-ready data annotation platform delivers unprecedented quality and scale across a diverse set of industries and use cases.
