Archives For Artificial Intelligence

When the architect of the world wide web speaks out about how his creation could end us all, I usually stop to listen.

On the 28th anniversary of the world wide web’s birth, Sir Tim Berners-Lee published this letter detailing what he views as the three main challenges for the web: loss of control over personal data, the spread of misinformation, and the need for transparency in online political advertising.

1) We’ve lost control of our personal data

The current business model for many websites offers free content in exchange for personal data. Many of us agree to this – albeit often by accepting long and confusing terms and conditions documents – but fundamentally we do not mind some information being collected in exchange for free services. But, we’re missing a trick. As our data is then held in proprietary silos, out of sight to us, we lose out on the benefits we could realise if we had direct control over this data, and chose when and with whom to share it. What’s more, we often do not have any way of feeding back to companies what data we’d rather not share – especially with third parties – the T&Cs are all or nothing.

This widespread data collection by companies also has other impacts. Through collaboration with – or coercion of – companies, governments are also increasingly watching our every move online, and passing extreme laws that trample on our rights to privacy. In repressive regimes, it’s easy to see the harm that can be caused – bloggers can be arrested or killed, and political opponents can be monitored. But even in countries where we believe governments have citizens’ best interests at heart, watching everyone, all the time is simply going too far. It creates a chilling effect on free speech and stops the web from being used as a space to explore important topics, like sensitive health issues, sexuality or religion.

2) It’s too easy for misinformation to spread on the web

Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And, they choose what to show us based on algorithms which learn from our personal data that they are constantly harvesting. The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or ‘fake news’, which is surprising, shocking, or designed to appeal to our biases can spread like wildfire. And through the use of data science and armies of bots, those with bad intentions can game the system to spread misinformation for financial or political gain.

3) Political advertising online needs transparency and understanding

Political advertising online has rapidly become a sophisticated industry. The fact that most people get their information from just a few platforms and the increasing sophistication of algorithms drawing upon rich pools of personal data, means that political campaigns are now building individual adverts targeted directly at users. One source suggests that in the 2016 US election, as many as 50,000 variations of adverts were being served every single day on Facebook, a near-impossible situation to monitor. And there are suggestions that some political adverts – in the US and around the world – are being used in unethical ways – to point voters to fake news sites, for instance, or to keep others away from the polls. Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups. Is that democratic?
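Reading point 2 again, the mechanism is simple enough to sketch in a few lines of code. Here is a toy model of an engagement-ranked feed; it is my own illustration, not any platform’s actual system, and every name and number in it is invented:

```python
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    predicted_click_prob: float  # a model's guess, learned from harvested personal data

def rank_feed(items: list[Item], top_k: int = 3) -> list[Item]:
    # An engagement-optimized feed surfaces whatever the user is most
    # likely to click on; accuracy never enters the objective.
    return sorted(items, key=lambda i: i.predicted_click_prob, reverse=True)[:top_k]

feed = rank_feed([
    Item("Measured policy analysis", 0.02),
    Item("Shocking claim tailored to your biases", 0.31),
    Item("Local sports update", 0.05),
])
print([item.headline for item in feed])  # the shocking claim takes the top slot
```

Nothing in that objective distinguishes true from false, which is exactly why surprising, bias-confirming content spreads like wildfire.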

Later in the letter, Berners-Lee says “I may have invented the web, but all of you have helped to create what it is today.” I think that’s extremely poignant.

Much like the scientists of the Manhattan Project, we don’t always understand the full implications of our pioneering technologies as they occur. Artificial intelligence (A.I.) is emerging as that next leap forward we truly don’t understand today. Rather than resist technology’s rise into our personal lives, I advocate that we embrace its persistence and help guide it toward the best possible outcome.

Dave Eggers’s The Circle is coming out as a movie in two weeks. When I read the book in 2013, I called it the Atlas Shrugged of our digital generation. Eggers had his finger on the pulse of a very real and emergent trend related to connectivity, interaction, and the slippery slope of using connectivity tools for good and for evil.

It will be fascinating to study how the general public reacts to the film and, in turn, how their behavior shapes awareness and outcry around the topics above: 1) abuse of our personal data, 2) fake news, and 3) transparency in political advertising.

Berners-Lee is a pragmatist, and a realist. But the general public is rarely either. It often takes fictionalized fantasy to help us escape our own fictionalized fantasy.

It’s been especially enjoyable to witness young people reading The Circle start to question aspects of their digital lifestyles in new ways. Like this op-ed from a student at the University of Washington:

Right now, most people have strongly opinionated answers to these questions, but after reading “The Circle,” readers are sure to have a more nuanced response. While it’s unlikely to completely change your mind, the book does an excellent job of complicating these familiar questions with new technology and perspectives.

As the influence of the internet in our lives grows, and companies and the government automatically have more access to our thoughts and lives, we have to ask ourselves where to draw the line. We need to be aware of how far people and companies are allowed to go and if we, as humans, are truly using these technologies for good; what is progress and what is too much?

I’m excited for the film, but I’m also a pragmatist and realist about how deep its impact could be.

As for me, when the architect of the world wide web speaks out about how his creation could end us all, I usually stop to listen.

Sources:
Sir Tim Berners-Lee lays out nightmare scenario where AI runs world economy | Techworld

Three challenges for the web, according to its inventor – World Wide Web Foundation

Beyond the Page: ‘The Circle,’ by Dave Eggers — Privacy in the modern age | The Daily

The following predictions were made by Ray Kurzweil in his book The Singularity Is Near.

Kurzweil’s book The Age of Spiritual Machines significantly impacted my life and changed my career trajectory…

[Image: a visual timeline of Ray Kurzweil’s predictions]

Source: The dawn of the singularity, a visual timeline of Ray Kurzweil’s predictions | KurzweilAI

If you’re a fan of Sherry Turkle’s Alone Together, this will hit you in all the right places…

In many instances, the researchers observed children persistently obstructing the robot. Sometimes a child would step aside when asked by the robot, but then would quickly come back in front of it. Other children started ignoring the robot’s requests and just stood in front of it. In at least one situation, a child started to verbally express her intention to block the robot (“No-no”) when requested to move. Other children joined her in obstructing the robot and saying it couldn’t go through.

According to the study, “Escaping from Children’s Abuse of Social Robots,” obstruction like this wasn’t nearly the worst of it. The tots’ behavior often escalated, and sometimes they’d get violent, hitting and kicking Robovie. They also engaged in verbal abuse, calling the robot “bad words.” (The researchers did not disclose what bad words may have been used, but they mention that one kid called the robot “idiot” eight times.)

The researchers say they observed the children “acting violently” toward the robot on several occasions: bending its neck, hitting it with a plastic bottle, hitting it with a ball, and throwing a plastic bottle at it.

The Japanese group didn’t just document the bullying behavior, though; they wanted to find clever ways of helping the robot avoid the abusive situations. They started by developing a computer simulation and statistical model of the children’s abuse towards the robot, showing that it happens primarily when the kids are in groups and no adults are nearby.

Next, they designed an abuse-evading algorithm to help the robot avoid situations where tiny humans might gang up on it. Literally tiny humans: the robot is programmed to run away from people who are below a certain height and escape in the direction of taller people. When it encounters a human, the system calculates the probability of abuse based on interaction time, pedestrian density, and the presence of people above or below 1.4 meters (4 feet 6 inches) in height. If the robot is statistically in danger, it changes its course towards a more crowded area or a taller person. This ensures that an adult is there to intervene when one of the little brats decides to pound the robot’s head with a bottle (which only happened a couple times).
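From the paper’s description, the escape behavior boils down to a risk estimate plus a retreat rule. Here is my rough sketch of what such a planner could look like; the 1.4-meter cutoff comes from the study, but every weight, threshold, and name below is my own assumption rather than the researchers’ implementation:

```python
import math
from dataclasses import dataclass

HEIGHT_CUTOFF_M = 1.4  # the study's child/adult height threshold

@dataclass
class Person:
    x: float
    y: float
    height_m: float

def abuse_probability(interaction_time_s: float, density: float,
                      people: list[Person]) -> float:
    # Toy stand-in for the paper's statistical model: risk rises with long
    # interactions and groups of children, and falls when adults or other
    # pedestrians are around. The weights are invented for illustration.
    children = sum(p.height_m < HEIGHT_CUTOFF_M for p in people)
    adults = len(people) - children
    return (0.4 * min(interaction_time_s / 60.0, 1.0)
            + 0.3 * min(children / 3.0, 1.0)
            + 0.2 * (1.0 - min(density, 1.0))
            + 0.1 * (0.0 if adults else 1.0))

def plan_escape(robot_xy: tuple[float, float], people: list[Person],
                interaction_time_s: float, density: float,
                risk_threshold: float = 0.5):
    # If the robot is statistically in danger, steer toward the nearest
    # person taller than the cutoff; otherwise keep the current plan.
    if abuse_probability(interaction_time_s, density, people) < risk_threshold:
        return None  # no evasive action needed
    adults = [p for p in people if p.height_m >= HEIGHT_CUTOFF_M]
    if not adults:
        return "head_for_crowd"  # fallback when no tall person is in view
    rx, ry = robot_xy
    nearest = min(adults, key=lambda p: math.hypot(p.x - rx, p.y - ry))
    return (nearest.x, nearest.y)
```

The real system presumably feeds this from the robot’s perception stack and runs it continuously, but the gist, flee toward the tall people, survives even in a toy version.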

via Children Beating Up Robot Inspires New Escape Maneuver System – IEEE Spectrum.

Our spaceLab team is having fun with Artificial Neural Networks…

via Do Computers Dream? | spaceLab.