Isn’t it dependent on the data set you train the AI on?
I saw an example on Twitter where the algorithm was displaying the high-contrast part of an image in the thumbnail, which meant darker skin tones were hidden and lighter ones were shown. Someone pointed out that Barack Obama was cropped out while Mitch McConnell was shown, and called it racism and a racist algorithm.
The devs probably prioritized contrast but ended up getting called racist.
Sure thing. But I’m familiar with this line of thought.
A typical example: let's use ML for parole decisions. Oh wait, it denies parole to a certain group of people (even if, say, race was specifically scrubbed from the training data set)? That's gotta be due to the training data; the court system must be racist or something.
Let's step back and see whether a certain class of people actually does have a higher recidivism rate in the population. Oh snap, they do! Let's blame systemic racism.
Then, since we all agree that systemic racism (and implicit bias) is a bad thing, let's tweak our objective function to get favorable results out of it...
The biggest problem with all this is that someone will have to pick up the tab - in terms of money, safety, human lives, speed of progress, etc.
This moves the whole conversation onto an ethical plane. Should we create a privileged class to compensate for perceived or objective injustice in our society? Some people say it's the ethical thing to do; others say it's basically fascism under the guise of good manners. The jury is still out :)
No, I'm not talking about things like assuming biases based on outcomes. I'm talking about actual math. Let's take a trivial example in which you have two populations, A and B. Population A comprises 90% of the total population and can be modeled well with a linear equation. Population B comprises 10% of the population and requires a quadratic equation to be modeled effectively. If you use a linear equation to model everyone, your objective will say you're doing a good job (because Population A comprises the vast majority of your samples), and you won't realize that you're modeling Population B poorly.

When people talk about ML bias and ML fairness, this is what they're talking about. Populations A and B don't have to be related to race or gender; they can be any arbitrary classes. In cases where A and B are different races, it makes headline news. Regardless of how you feel about that, though, making your model fair improves the overall quality of your results.
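The scenario above can be sketched in a few lines of numpy. All the numbers and function shapes here are made up for illustration: A (90% of samples) follows a linear relationship, B (10%) a quadratic one, and a single linear fit looks fine in aggregate while being badly wrong for B.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up populations: A (90% of samples) is linear, B (10%) is quadratic.
n_a, n_b = 900, 100
x_a = rng.uniform(0, 1, n_a)
y_a = 2 * x_a + rng.normal(0, 0.05, n_a)        # linear relationship
x_b = rng.uniform(0, 1, n_b)
y_b = 4 * x_b**2 + rng.normal(0, 0.05, n_b)     # quadratic relationship

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

# Fit ONE linear model to everyone by ordinary least squares.
coeffs = np.polyfit(x, y, 1)
pred = np.polyval(coeffs, x)

mse_overall = np.mean((pred - y) ** 2)
mse_a = np.mean((pred[:n_a] - y[:n_a]) ** 2)
mse_b = np.mean((pred[n_a:] - y[n_a:]) ** 2)

# The aggregate number looks fine because A dominates it;
# B's error is far worse but invisible in the overall metric.
print(f"overall MSE: {mse_overall:.3f}")
print(f"A MSE:       {mse_a:.3f}")
print(f"B MSE:       {mse_b:.3f}")
```

Only the per-group breakdown reveals the problem; the overall MSE alone would pass most sanity checks.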
We need to hire an AI expert to introduce a woke coefficient into each node of the neural network. They certainly exist; I've read an interview with one who worked at Google. Oh wait, it's her!
You're being sarcastic, but you're actually not too far from the truth. Part of the problem is that objective functions can quietly discriminate against underrepresented classes.
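One crude way to see this in code: reweight the least-squares objective so each group contributes equally to the loss, instead of letting the majority dominate. This is a sketch with made-up data (same linear-A / quadratic-B setup as described earlier in the thread), not a recommendation for any real system. Note the trade-off: B's error shrinks, but A pays for it; somebody picks up the tab.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: A (90%) follows a linear relationship, B (10%) a quadratic one.
n_a, n_b = 900, 100
x_a = rng.uniform(0, 1, n_a)
y_a = 2 * x_a + rng.normal(0, 0.05, n_a)
x_b = rng.uniform(0, 1, n_b)
y_b = 4 * x_b**2 + rng.normal(0, 0.05, n_b)
x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

def group_mse(coeffs):
    """Per-group mean squared error of a polynomial fit."""
    pred = np.polyval(coeffs, x)
    return (np.mean((pred[:n_a] - y[:n_a]) ** 2),
            np.mean((pred[n_a:] - y[n_a:]) ** 2))

# Plain least squares: every sample weighted equally, so group A dominates.
plain = np.polyfit(x, y, 1)

# Group-balanced least squares: weight each sample by 1/sqrt(group size),
# so each group contributes equally to the total squared loss.
w = np.concatenate([np.full(n_a, 1 / np.sqrt(n_a)),
                    np.full(n_b, 1 / np.sqrt(n_b))])
balanced = np.polyfit(x, y, 1, w=w)

mse_a_plain, mse_b_plain = group_mse(plain)
mse_a_bal, mse_b_bal = group_mse(balanced)
print(f"plain:    A={mse_a_plain:.3f}  B={mse_b_plain:.3f}")
print(f"balanced: A={mse_a_bal:.3f}  B={mse_b_bal:.3f}")
```

Reweighting narrows the gap between the groups but doesn't eliminate it; a linear model still can't represent B's quadratic relationship, which is why fairness work also looks at model capacity, not just the loss.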
PS: I'm happy to listen to your examples.
>hyper advanced AI
Racist
>trying to colonize Mars
White privilege and Europe centric mindset
>etc
Many such cases
This comment was deleted by original commenter.
I'm saying this as a non-Chinese US citizen.