Artificial intelligence recognizes skin color from X-rays, and researchers wonder why

It’s an astonishing discovery: the AI needs only X-rays to deduce a patient’s skin color.

The international research team from Australia, Canada and the United States did not expect this: after they fed X-rays from several patients to an artificial intelligence, it was able to determine which race each patient belonged to, even though this could not be inferred from any other data in the images.

AI recognizes skin color although no data should allow it to be inferred

The team wants the findings to be understood as a warning to science: it should always be kept in mind that self-learning algorithms used in medicine can end up treating people of different skin colors differently, even if their creators never designed them to do so.

It can be described as quite frightening that the research team itself has not been able to figure out how the AI determines race. Already in the study design, they made sure the AI could not fall back on indirect visual features with known ethnic correlations, such as body mass index, bone density, or other diagnoses. The team describes the details in a paper published on the arXiv preprint server (PDF).

AI-driven discrimination based on skin color is a reality

One could argue that unintended categorization by skin color is not a problem in itself. In fact, inadvertent categorization can do very real harm. In a case at a large US hospital that became known in 2019, the AI used to select high-risk patients for special care programs clearly favored white patients: given the same state of health, they were admitted to the programs far more often than Black patients.


As Wired reported, the preference for white patients was not intentional racial discrimination, but the result of the set of parameters the AI worked with. Insurance status, and with it the cost-recovery aspect, appears to have played a role as well. And since white patients in the United States tend to have better health insurance, that may be how the disadvantage arose.

AI ethicists know the problem, but the solutions raise new questions

The problem is not new to AI ethicists; it is just one of many. Solutions already exist. The most promising is to neutralize, so to speak, the database on which the AI’s training is based. To this end, a first training run is carried out so that the resulting outputs can be checked. If the trained model turns out to produce ethically questionable results, the training database is modified accordingly. The fact that these modifications raise ethical questions of their own should not go unnoticed.
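The train-audit-modify loop described above can be sketched in a few lines of Python. Everything here is illustrative: the synthetic patients, the cost proxy, and the `select_top` stand-in "model" are invented to mirror the 2019 hospital case, not taken from the study or any real system.

```python
# Illustrative sketch of the "train, audit, neutralize" loop described above.
# All names and numbers are invented for the example; real systems use far
# more complex models and real billing data.

def audit(selected, patients):
    """Selection rate per skin-color group among the chosen cohort."""
    rates = {}
    for group in {p[0] for p in patients}:
        members = [p for p in patients if p[0] == group]
        rates[group] = sum(1 for m in members if m in selected) / len(members)
    return rates

def select_top(patients, key, k):
    """Stand-in 'model': pick the k patients ranked highest by an attribute."""
    return sorted(patients, key=key, reverse=True)[:k]

# Synthetic patients: (group, medical_need, billed_cost). Group "A" is better
# insured, so equal need produces higher billed cost -- the proxy problem.
patients = ([("A", need, need * 2.0) for need in range(10)]
            + [("B", need, need * 1.0) for need in range(10)])

# Step 1: deploy on the biased proxy (billed cost) and audit the outcome.
rates_cost = audit(select_top(patients, key=lambda p: p[2], k=10), patients)
print(rates_cost)  # group A is selected far more often than group B

# Step 2: "neutralize" the data basis: rank on medical need instead.
rates_need = audit(select_top(patients, key=lambda p: p[1], k=10), patients)
print(rates_need)  # selection rates equalize
```

The audit makes the disparity visible before deployment; the fix here (swapping the target attribute) is only one of several possible modifications to the data basis, and each carries its own ethical trade-offs.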
