Poayjay

joined 11 months ago
[–] [email protected] 1 points 1 month ago (1 children)

That’s the huge takeaway here. The Chinese can’t comprehend that the DOD doesn’t have a social media control division. Yes, we have the NSA and similar agencies spying, but they don’t control anything.

[–] [email protected] 7 points 2 months ago

What about ThePicardManeuver?!

[–] [email protected] 0 points 8 months ago (4 children)

I challenge anyone who says sugar isn’t addictive to go a week without it. No sugar, no added sugars like fructose, no sugar substitutes. I’ve done it. It is awful.

I’ve also done hard drugs. Quitting those is awful too.

The difference is that I haven’t done drugs in decades but I still have a pack of Oreos on my counter.

[–] [email protected] 30 points 10 months ago

I never realized all this but it’s so true. I browse and comment until I’m caught up, then log off.

Wow

[–] [email protected] 15 points 10 months ago (4 children)

I completely disagree. It absolutely is the AI doing this. The point the article is trying to make is that the data used to train the AI is full of exclusionary hiring practices. The AI learns those patterns and carries them forward.

Using your metaphor, it would be like training the AI on hundreds of Excel spreadsheets that were sorted by race. The AI learns that sorting and starts doing it too.
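
To show how mechanical this is, here’s a toy sketch (my own made-up example, not from the article) assuming scikit-learn and synthetic data: train a model on past hiring decisions that favored one group, and it learns to use the group label itself as a predictor.

```python
# Toy sketch: a model trained on biased hiring decisions reproduces the bias.
# The data and column meanings here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "past hiring" records: [years_experience, protected_group]
# Historically, group 0 was hired and group 1 was not, regardless of experience.
X = np.array([[5, 0], [2, 0], [6, 1], [8, 1], [3, 0], [7, 1], [4, 0], [9, 1]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # 1 = hired, mirroring the historical bias

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in the protected attribute:
print(model.predict([[6, 0], [6, 1]]))  # the model favors group 0, i.e. it learned the bias
```

Nobody told the model to discriminate; it just found that the group label predicted past decisions and carried that forward.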

This touches on one of the huge ethical questions around regulating AI. If you are discriminated against in a job hunt by an AI, whose fault is that? The AI is just doing what it’s taught. The company is just doing what the AI said. The AI developers are just feeding it previous hiring data. If the previous hiring data is racist or sexist or whatever, you can’t retroactively correct that. This is exactly why we need to regulate how AI is trained, not just how it’s deployed.