
Better Together Announces Results of Communications Agency-Led Survey on Generative AI Biases

Better Together has released a survey on biases in generative AI, advocating for responsible and equitable technologies and offering insights into how communications campaigns can address systemic biases in generative AI.

(PRUnderground) April 15th, 2024

Research Unveils Crucial Insights into Overcoming and Addressing Structural and Systemic Biases in Generative AI Technologies

Better Together, a communications agency dedicated to providing comprehensive communications services for social impact initiatives and organizations, announces the results of a new survey regarding biases in generative AI (artificial intelligence). This research sheds light on the critical issues of bias within generative AI technologies, offering actionable insights for fostering more inclusive and equitable advancements in this rapidly evolving field.

“In my experiences with generative AI, I’ve seen how biases are embedded in these technologies,” said Better Together Founder and CEO Catharine Montgomery. “When I requested an AI-generated image of Maya Angelou, the technology presented me with an older white woman, and when adapting a radio script for a Black audience, the generative AI tool resorted to stereotypical offensive language. These instances were eye-openers for me and served as a call to action. Better Together conducted this survey to highlight these challenges and find solutions. It’s a core part of our mission and our work with our clients to ensure that generative AI is built ‘with’ rather than ‘for’ distressed communities.”

Better Together’s survey uncovers a wide range of biases in generative AI, from racial and gender prejudices to socio-economic and age-related disparities, echoing broader societal concerns. The study advocates for a more responsible and ethical approach to AI innovation by addressing these biases and emphasizing the importance of diversity, equity and inclusion (DEI) in the technology sector.

This research positions Better Together at the forefront of the conversation on generative AI and DEI, highlighting the agency’s commitment to leveraging communication as a tool for social change. The findings from the survey will catalyze meaningful dialogue and action among multidisciplinary experts, including technology builders and developers, policymakers, and community members, driving the agenda for unbiased and fair AI systems.

James T. McKim, Jr., PMP, ITIL, Managing Partner of Organizational Ignition, LLC, who partnered with Better Together on this survey, remarked, “Recognizing biases in AI is the first step toward creating a more equitable future. By employing effective communication strategies and methods, AI system developers can mitigate bias’s harmful effects, paving the way for AI systems that reflect and promote diversity and fairness.”

Key Findings from the Generative AI Biases Survey Include:

  • Only 77.24 percent of respondents were aware of what generative AI is, indicating a significant gap in understanding that could influence perceptions of bias and fairness in AI technologies.
  • The survey revealed racism (221 mentions), sexism (180 mentions), and classism (156 mentions) as the top concerns among respondents, underscoring the need for AI developments to be mindful of societal inequalities.
  • Younger respondents (18-29) showed heightened sensitivity toward discriminatory content generation, with more than 60 percent expressing concerns, highlighting the importance of engaging younger demographics in discussions around AI ethics and development.
  • A notable skepticism exists toward technology companies’ commitment to DEI in generative AI tool creation, with a weighted average trust rating of 3.02 (on a scale where lower numbers indicate disagreement), suggesting an opportunity for businesses to build trust through transparency and ethical practices.
  • The study advocates for rigorous testing and validation processes by technology companies to ensure generative AI tools accurately reflect historical contexts and diverse perspectives, highlighting the role of responsible representation in generative AI development.
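As an illustration of how a weighted average rating like the 3.02 trust figure above might be derived from survey responses: the sketch below assumes a 5-point Likert agreement scale (1 = strongly disagree, 5 = strongly agree) and uses invented response counts; the survey's actual scale and distribution are not published in this release.

```python
# Hypothetical illustration: computing a weighted average trust rating
# from Likert-scale responses (1 = strongly disagree ... 5 = strongly agree).
# The response counts below are invented for demonstration only.

def weighted_average(counts):
    """counts maps each scale value to the number of respondents choosing it."""
    total_responses = sum(counts.values())
    weighted_sum = sum(value * n for value, n in counts.items())
    return weighted_sum / total_responses

# Invented distribution across the 1-5 agreement scale
responses = {1: 40, 2: 55, 3: 90, 4: 60, 5: 25}

print(round(weighted_average(responses), 2))  # → 2.91
```

A result near the midpoint of the scale, as in the survey's 3.02 figure, indicates mixed agreement rather than clear trust or distrust.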

“Better Together is proud to lead the way in identifying and addressing the challenges of bias in generative AI,” added Montgomery. “Our survey underscores the urgent need for collaboration across sectors to ensure a diverse mix of experts and those with lived experiences are at the table to provide solutions for generative AI technologies, so they reflect the diversity and richness of society.”

About Better Together

Better Together is a Black-woman-founded full-service communications agency committed to driving social impact through strategic, values-led communication initiatives. Founded on advocacy and social justice principles, the agency works with global advocacy partners to promote diversity, equity and inclusion in technology and beyond. With its survey on generative AI biases, Better Together continues to champion the development of technology that serves the greater good, ensuring a future where innovation benefits everyone.

About the Better Together Generative AI Biases Survey

Conducted by Better Together, this survey is a comprehensive communications-agency-led study of biases within generative AI technologies. The research involved data collection and analysis, engaging participants from diverse backgrounds to gather insights into the prevalence and impact of biases in generative AI. The findings offer a perspective on the challenges and opportunities for creating more inclusive generative AI systems, contributing to the broader discourse on equity and fairness in technology.

For more information on Better Together and to explore the survey findings, visit https://thebettertogether.agency/biases-in-generative-ai.

Press Contact

Name: Lucia Moreno
Phone: 703-643-4966
Email: Contact Us
