

Dr. Sahar Hashmi, an expert in digital health and AI technologies, advises CEOs and investors to focus on generative AI bias prevention centers

Dr. Sahar Hashmi comments: “When teaching my Harvard grad students about OpenAI, we had a good discussion about ChatGPT’s potential to become Google’s next business competitor in the near future. As generative AI applications like Stable Diffusion, DALL·E 2, ChatGPT, and Lensa became a reality, it has been very exciting to observe both the pros and cons of such AI applications. Recently, Sundar Pichai (CEO of Google) issued a ‘code red’ upon the launch of ChatGPT, viewing it as a threat to Google.”

She added: “As the interest of venture capital investors in the tech world shifts from the metaverse and crypto to more practical applications of generative AI, it is important to make sure the data and information these apps provide to customers are free of bias and hallucinations. In my recent CIO Today article on AI gatekeepers, I stress the need for establishing a generative AI bias prevention center. In a Twitter post, Sam Altman (CEO of OpenAI) stated that ‘ChatGPT is incredibly limited and it’s a mistake to use ChatGPT for important tasks.’ If Google or OpenAI focuses on establishing a strong AI bias prevention center and generates more reliable, transparent chatbot responses backed by reliable and authentic sources, then it will be tough competition for the billionaires.”

Stop Bias Before It Starts: Why AI Gatekeepers and Generative AI Applications Like ChatGPT Need to Invest in Bias Prevention Centers

By Sahar Hashmi, MD, PhD

In an era when most business work environments are constantly changing because of digital transformation, job uncertainty, labor shortages, and new pandemic-related hybrid or part-time work arrangements, AI-powered algorithmic automation has taken on the role of managerial gatekeeper.

While some business leaders suggest this is the best time to invest in AI automation and digital transformation, it is also important to invest in an innovative bias prevention center. There is growing interest among tech venture capitalists (VCs) in generative AI applications. When Microsoft’s chatbot Tay came to market, it was quickly shut down due to public pushback, and many have expressed concerns about how the AI avatar app Lensa generates images of Asian women.

Bias prevention would help high-performance organizations and tech giants achieve their goal of making their organizations more diverse and inclusive. This comes at a time when the White House recently stressed the need to make AI algorithms safer and to prevent AI biases as more businesses move to AI-based systems. With the recent launch of ChatGPT, an AI platform developed by OpenAI where customers can get answers on just about anything, from finding a simple food recipe to learning how to code a website, some comment that it is a revolutionary search engine that could be better than Google.

Yet there are concerns that such an AI chatbot can lead to more harm than benefit and thus requires specialized and advanced supervision to prevent harm and bias in such advanced AI applications.

The informational data for developing apps, chatbots, or the automation and transformation of services requires gatekeepers. The gatekeeper is a concept Prof. Thomas Allen of MIT’s Sloan School of Management introduced in 1969, recognizing that gatekeepers are essential in controlling the flow of information both out of and into an organization.

But today’s gatekeepers may be AI agents, specialized generative AI chatbots, powerful AI algorithms, or algorithmic automation. They are more efficient than humans, cheaper to implement, and more reliable.

Several industries use AI algorithms as gatekeepers, including banks, which use AI-automated systems to screen applicants, determine their eligibility for loans and mortgages, and set credit card limits. There are pros and cons to this approach.

As gatekeepers for bank loan systems, AI algorithms can approve or reject an application within minutes. The service is available to customers 24/7, all year round, and thus can benefit both the banks and the customers.

AI-powered algorithms provide convenience to customers, lower the cost of operations, and lower the chances of mistakes. According to an Accenture report, “banks can achieve a two to five times increase in the volume of interactions or transactions with the same headcount.”

However, there are concerns about potential racial bias embedded in these algorithms. Some banks approved only 47% of applications for homeownership from Black applicants compared with 72% from white applicants, according to a Bloomberg report. If the gatekeeper algorithms are trained on biased data, for example, a dataset in which a majority of Black Americans have lower FICO scores, there is a higher chance that members of certain low-income and racial groups may never be approved.
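One common way an audit quantifies a disparity like the one cited above is the "four-fifths rule": if one group's approval rate falls below 80% of the highest group's rate, the outcome is commonly treated as evidence of disparate impact. The sketch below applies that rule to the approval rates from the Bloomberg report; the function name and threshold are illustrative, not taken from any specific auditing tool.

```python
def disparate_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of one group's approval rate to the reference group's rate."""
    return rate_group / rate_reference

# Approval rates cited in the Bloomberg report above.
black_rate = 0.47
white_rate = 0.72

ratio = disparate_impact_ratio(black_rate, white_rate)
print(f"Disparate impact ratio: {ratio:.2f}")  # ~0.65

# Four-fifths rule: a ratio below 0.80 is flagged for review.
print("Flag for review:", ratio < 0.80)  # True
```

A ratio of roughly 0.65 falls well below the 0.80 threshold, which is why numbers like these attract regulatory attention.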

Many hospitals and clinics are using AI algorithms to triage patients more efficiently and navigate them to the appropriate physicians, thereby using AI as a managerial gatekeeper. The Mayo Clinic, for example, used an AI patient triage system during the pandemic to determine, based on urgency, whether patients needed to come to the hospital. If such a system is biased, certain populations may not get access to care.

Almost all of the big tech giants are using conversational AI chatbots, which serve as gatekeepers, in their retail businesses. One prominent example is Amazon, which uses chatbots to help customers track down lost and delayed packages. If their voice detection system is biased, service may be efficient for only certain groups depending on accent or race.

Since organizations and tech giants using AI-automated gatekeepers must avoid bias, there is a dire need for an innovative AI bias prevention and screening center. The center would help prevent bias from the start and would create new jobs in the industry. Attention should be focused on the algorithms with embedded human biases.

An innovative generative AI bias prevention center would include three departments. The first would be an algorithm screening department, that is, a data screening and data bias detection department that searches for bias. It would need to hire and train a diverse team of data scientists responsible for developing the raw data collection and screening protocols. The biases in an algorithm originate from the data used to feed and train it for certain tasks, so data needs to be screened from the start. This would be a crucial step to detect bias before it appears in the system.
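One concrete check such a data screening department might run is a representation report: before any training happens, measure each demographic group's share of the raw dataset and flag groups that fall below a minimum share. This is a minimal sketch with a toy dataset; the function name, field name, and 10% threshold are assumptions for illustration.

```python
from collections import Counter

def representation_report(records, group_key, min_share=0.10):
    """Return each group's share of the dataset and whether it is
    under-represented (share below min_share)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: (n / total, n / total < min_share)  # (share, flagged?)
        for group, n in counts.items()
    }

# Toy dataset for illustration only: group B is badly under-represented.
data = [{"group": "A"}] * 95 + [{"group": "B"}] * 5
for group, (share, flagged) in representation_report(data, "group").items():
    print(group, f"{share:.0%}", "UNDER-REPRESENTED" if flagged else "ok")
```

A screen like this catches skewed training data before the algorithm is ever trained, which is exactly the "prevent bias from the start" step the department is meant to own.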

The second would be an algorithm monitoring department, which would develop bias detection software tools for use after the algorithm is built. With specialized, responsible-AI software tools capable of detecting bias in the system, specialized filters could control the image generation in apps like Lensa or detect hallucinations in applications like ChatGPT.

Finally, the third would be an algorithm testing and human supervision department. The black-box nature of many algorithms presents a hurdle, but there are creative ways to detect bias. A diverse and inclusive team would be tasked with designing, performing, and analyzing experiments and protocols for testing the gatekeeper algorithms. The organization’s leadership team can be part of this supervision to prevent bias in a more efficient and accountable manner.

This continuous, lifelong monitoring for bias and bugs in the software is a necessity, as specialized chatbots like ChatGPT can develop biased behavior at any time, which may lead to biased conversations between the chatbot and its audience.

At a time when most tasks will be automated in order to save money and provide faster, more convenient, and more efficient services, there is an urgent need for AI bias prevention centers that can make the existing gatekeepers, and the new specialized chatbots and apps entering the market, more widely accepted, efficient, transparent, and inclusive.

Sahar Hashmi, MD, PhD, is the CEO of Myriad Consulting LLC and a faculty instructor at Harvard University (https://www.myriad-consulting-llc.com). Dr. Hashmi is a leader in innovative AI technologies, digital health, and the design of smart, innovative systems.

The original article was also published in CIO Today.




