AI tools to help combat disinformation and hate speech are already in use - but just how effective are they? Is better counter-disinformation technology on the horizon? Will it be strong enough to fight back against state-sponsored disinformation?
Courtesy of Refik Anadol
When artificial intelligence was envisioned in the 1950s and '60s, the goal was to teach a computer to perform a range of cognitive tasks and operations, similar to a human mind. Fast forward half a century, and AI is shaping our aesthetic choices, with automated algorithms suggesting what we should see, read, and listen to. It helps us make aesthetic decisions when we create media, from movie trailers and music albums to product and web designs. We have already felt some of the cultural effects of AI adoption, even if we aren't aware of it.
As educator and theorist Lev Manovich has explained, computers perform endless intelligent operations. Your smartphone’s keyboard gradually adapts to your typing style. Your phone may also monitor how you use apps and adjust their background activity to save battery. Your map app automatically calculates the fastest route, taking traffic conditions into account. There are thousands of intelligent, but not