Summary:
Big Tech has been on the defensive lately, and for good reason. What was once perceived as a way to foster democracy has given way to algorithmic dystopia. But Facebook's algorithmic dangers are tied to an ad-server-based model we must dismantle. Rant time.
Facebook is perfect for amplifying and spreading disinformation at lightning speed to global audiences. And it doesn't do this by accident. Its single motivation is to hold and grow its massive list of active members to expand its ad revenue.
To do that, you give people what they want: a place to post pictures of their dogs, or a haven for QAnon conspiracy theories that have already provoked violent incidents.
Summary:
I'd had plenty of satirical fun at AI's expense. But AI has also changed my content workflow. Here's how an AI service can become an AI platform, overcome glitches, and achieve a different level of user loyalty.
A couple years ago, if you told me I'd be writing a (mostly) positive post about Otter.ai, I would have been gobsmacked. The story of how Otter.ai won me over is, for me, a humble pie lesson in the practical potential of AI - and how an imaginative platform can have so much more impact than a standalone service.
Summary:
Opacity in AI used to be an academic problem - now it's everyone's problem. In this piece, I define the issues at stake, and how they tie into the ongoing discussion on AI ethics.
Opacity in AI is a formal, academic description of what is more commonly referred to as, "What the heck is the algorithm doing?" It's a problem that is at the root of many ethical issues with AI.
It appears in robust classification and ranking mechanisms - search engines, credit card fraud detection, market segmentation, spam filters - used in areas such as insurance, loan qualification, advertising, and credit. These classification mechanisms are computed by algorithms, most often machine learning algorithms.
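To make the opacity concrete, here is a minimal sketch of one such classification mechanism: a toy naive Bayes spam filter. The training messages and word-count approach are hypothetical illustrations, not anything from a real product. The point is that the resulting decision comes out of learned word statistics, not an explicit, human-readable rule.

```python
from collections import Counter
import math

# Toy training data (hypothetical examples for illustration only)
spam = ["win cash now", "free prize win", "cash prize offer"]
ham = ["meeting at noon", "lunch at noon tomorrow", "project meeting notes"]

def word_counts(docs):
    """Count word occurrences across a list of messages."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts, total):
    # Laplace-smoothed log-probability of the message under one class
    score = 0.0
    for word in message.split():
        score += math.log((counts[word] + 1) / (total + len(vocab)))
    return score

def classify(message):
    """Label a message by whichever class makes it more probable."""
    s = log_likelihood(message, spam_counts, sum(spam_counts.values()))
    h = log_likelihood(message, ham_counts, sum(ham_counts.values()))
    return "spam" if s > h else "ham"
```

Even in this tiny version, asking "why was this message flagged?" leads only to a table of smoothed word frequencies; scale that up to millions of features in a fraud or credit model, and the opacity problem described above follows directly.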