Archives For Artificial Intelligence

Last fall a lingerie retailer replaced its digital agency with an AI platform named “Albert”. The AI was tasked with identifying and then converting high-value audiences across paid search and social media. Albert then autonomously executed the brand’s digital marketing using creative and KPIs the brand provided.

The result? The retailer more than tripled its ROI and increased its customer base by 30%.

Key quote:

“After seeing [Artificial Intelligence] handle our paid search and social media marketing, I would never have a human do this again.”

Source: Why Cosabella replaced its agency with AI and will never go back to humans

So this leaves us with two questions:

  1. Are you experimenting with AI and algorithms to drive efficiencies in your marketing program?
  2. Are the actions and tactics you’re personally doing at risk of being replaced by AI? And if so, how can you lean more into creative and strategy to be ready when the tech catches up with you?

We live in amazing times.


NewCo Shift recently surveyed public opinion on superintelligence, and the results are fascinating.

Source: Superintelligence and Public Opinion – NewCo Shift

My friend Tim Brunelle recently gave a speech on creativity in the age of A.I. and automation. He wrote an 11-minute Medium piece on it that is so rich and full of quotes, it takes a couple of reads to appreciate its breadth and impact.

For starters, I love his point about how idea people are agitators that can be perceived as troublemakers. This is something I’ve learned a lot about myself in the past few years…


As Idea People, we are also agitators.

I’ll paraphrase Robert Grudin, who describes us in his book The Grace of Great Things, “Many [Idea People] initially are seen as troublemakers simply because their vigorous and uncompromising analysis exposes problems that previously had been ignored.”

Grudin warns that, “Creativity is dangerous. We cannot open ourselves to new insight without endangering the security of prior assumptions. Creative achievement”—and that’s what I believe all of us Idea People are all about — “Creative achievement is… an adventure. Its pleasure is not the comfort of the safe harbor, but the thrill of the reaching sail.”

So onward we sail.

Tim goes on to discuss the problem of A.I. and automation stealing our jobs, and then offers his simple premise: rather than resist and fight it, we should learn how to enhance our own creativity using these new tools.

So I’m curious — what if you editors, you publishers, writers and designers thought of yourselves as technologists? How might your product evolve, what new products would emerge — from curious Idea People seeking to apply the benefits of AI to the sustained, periodic shipment of words, images and motion to subscribers?

I must admit I am not a scientist. I am not a software developer. I can’t spool up an artificial intelligence on Amazon Web Services. But I can ask questions and I can learn. In learning about AI and automation I’ve found I am not afraid of the future of Idea People. I’m bullish on our abilities to derive opportunity from the evolution of technology.

I believe the long term, passionate, purposeful thinkers in this room will discover unique, robust and profitable ways to benefit from automation and artificial intelligence. If we remain curious.

I think this is cogent advice for anyone in the creative industry or a role that requires any semblance of problem solving. Embracing emerging technology and learning to use it today pays long-term dividends.

Don’t fight the A.I. Become its master.


“The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else.”

–Eliezer Yudkowsky, A.I. theorist


When the architect of the world wide web speaks out about how his creation could end us all, I usually stop to listen.

On the 28th anniversary of the world wide web’s birth, Sir Tim Berners-Lee published this letter detailing what he views as the three main challenges for the web: loss of control over personal data, the spread of misinformation across the web and the need for transparency with online political advertising.

1)   We’ve lost control of our personal data

The current business model for many websites offers free content in exchange for personal data. Many of us agree to this – albeit often by accepting long and confusing terms and conditions documents – but fundamentally we do not mind some information being collected in exchange for free services. But, we’re missing a trick. As our data is then held in proprietary silos, out of sight to us, we lose out on the benefits we could realise if we had direct control over this data, and chose when and with whom to share it. What’s more, we often do not have any way of feeding back to companies what data we’d rather not share – especially with third parties – the T&Cs are all or nothing.

This widespread data collection by companies also has other impacts. Through collaboration with – or coercion of – companies, governments are also increasingly watching our every move online, and passing extreme laws that trample on our rights to privacy. In repressive regimes, it’s easy to see the harm that can be caused – bloggers can be arrested or killed, and political opponents can be monitored. But even in countries where we believe governments have citizens’ best interests at heart, watching everyone, all the time is simply going too far. It creates a chilling effect on free speech and stops the web from being used as a space to explore important topics, like sensitive health issues, sexuality or religion.

2)   It’s too easy for misinformation to spread on the web

Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And, they choose what to show us based on algorithms which learn from our personal data that they are constantly harvesting. The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or ‘fake news’, which is surprising, shocking, or designed to appeal to our biases can spread like wildfire. And through the use of data science and armies of bots, those with bad intentions can game the system to spread misinformation for financial or political gain.

3)   Political advertising online needs transparency and understanding

Political advertising online has rapidly become a sophisticated industry. The fact that most people get their information from just a few platforms and the increasing sophistication of algorithms drawing upon rich pools of personal data, means that political campaigns are now building individual adverts targeted directly at users. One source suggests that in the 2016 US election, as many as 50,000 variations of adverts were being served every single day on Facebook, a near-impossible situation to monitor. And there are suggestions that some political adverts – in the US and around the world – are being used in unethical ways – to point voters to fake news sites, for instance, or to keep others away from the polls. Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups. Is that democratic?

Later in the letter, Berners-Lee says “I may have invented the web, but all of you have helped to create what it is today.” I think that’s extremely poignant.

Much like with The Manhattan Project, we don’t always understand the full implications of our pioneering technologies as they occur. Artificial intelligence (A.I.) is emerging as that next leap forward we truly don’t understand today. Rather than resist technology’s rise into our personal lives, I advocate we embrace its persistence and help guide it to the best possible outcome.

Dave Eggers’s The Circle is coming out as a movie in two weeks. When I read the book in 2013 I called it the Atlas Shrugged of our digital generation. Eggers had his finger on the pulse of a very real and emergent trend related to connectivity, interaction, and the subjective slippery slope of using connectivity tools for good and evil.

It will be fascinating to study how the general public reacts to the film, and in turn, how their behavior impacts awareness and outcry around the topics above: 1) abuse of our personal data, 2) fake news, and 3) transparency in political advertising.

Berners-Lee is a pragmatist, and a realist. But the general public is rarely either. It often takes fictionalized fantasy to help us escape our own fictionalized fantasy.

It’s been especially enjoyable to witness young people reading The Circle start to question aspects of their digital lifestyle in new ways. Like this op-ed from a student at the University of Washington:

Right now, most people have strongly opinionated answers to these questions, but after reading “The Circle,” readers are sure to have a more nuanced response. While it’s unlikely to completely change your mind, the book does an excellent job of complicating these familiar questions with new technology and perspectives.

As the influence of the internet in our lives grows, and companies and the government automatically have more access to our thoughts and lives, we have to ask ourselves where to draw the line. We need to be aware of how far people and companies are allowed to go and if we, as humans, are truly using these technologies for good; what is progress and what is too much?

I’m excited for the film, but I’m also a pragmatist and realist about how deep its impact could be.

As for me, when the architect of the world wide web speaks out about how his creation could end us all, I usually stop to listen.


Sources:
Sir Tim Berners-Lee lays out nightmare scenario where AI runs world economy | Social Media | Techworld

Three challenges for the web, according to its inventor – World Wide Web Foundation

Beyond the Page: ‘The Circle,’ by Dave Eggers — Privacy in the modern age | The Daily

The following predictions were made by Ray Kurzweil in his book The Singularity Is Near.

Kurzweil’s earlier book The Age of Spiritual Machines significantly impacted my life and changed my career trajectory…

Ray Kurzweil predictions

Source: The dawn of the singularity, a visual timeline of Ray Kurzweil’s predictions | KurzweilAI

If you’re a fan of Sherry Turkle’s Alone Together, this will hit you in all the right places…

In many instances, the researchers observed children persistently obstructing the robot. Sometimes a child would step aside when asked by the robot, but then would quickly come back in front of it. Other children started ignoring the robot’s requests and just stood in front of it. In at least one situation (above), a child started to verbally express her intention to block the robot (“No-no”), when requested to move. Other children joined her in obstructing the robot and saying it couldn’t go through.

According to the study, “Escaping from Children’s Abuse of Social Robots,” obstruction like this wasn’t nearly the worst of it. The tots’ behavior often escalated, and sometimes they’d get violent, hitting and kicking Robovie (below). They also engaged in verbal abuse, calling the robot “bad words.” (The researchers did not disclose what bad words may have been used, but they mention that one kid called the robot “idiot” eight times.)

The researchers say they observed the children “acting violently” toward the robot on several occasions: bending its neck, hitting it with a plastic bottle, hitting it with a ball, and throwing a plastic bottle at it.

The Japanese group didn’t just document the bullying behavior, though; they wanted to find clever ways of helping the robot avoid the abusive situations. They started by developing a computer simulation and statistical model of the children’s abuse towards the robot, showing that it happens primarily when the kids are in groups and no adults are nearby.

Next, they designed an abuse-evading algorithm to help the robot avoid situations where tiny humans might gang up on it. Literally tiny humans: the robot is programmed to run away from people who are below a certain height and escape in the direction of taller people. When it encounters a human, the system calculates the probability of abuse based on interaction time, pedestrian density, and the presence of people above or below 1.4 meters (4 feet 6 inches) in height. If the robot is statistically in danger, it changes its course towards a more crowded area or a taller person. This ensures that an adult is there to intervene when one of the little brats decides to pound the robot’s head with a bottle (which only happened a couple times).
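The escape heuristic described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the researchers’ actual implementation: the 1.4-meter height cutoff comes from the article, but the function names, scoring formula, and 0.5 risk threshold are my own assumptions.

```python
# Hypothetical sketch of the abuse-evading heuristic described in the study.
# Only the 1.4 m height cutoff is from the source; everything else is assumed.

CHILD_HEIGHT_M = 1.4  # people below this height are treated as potential abusers

def abuse_probability(interaction_time_s, pedestrian_density, heights_m):
    """Estimate abuse risk from nearby pedestrians (toy scoring function).

    Risk rises with interaction time and the share of nearby people under
    1.4 m, and falls when taller people (adults) are present.
    """
    if not heights_m:
        return 0.0
    children = sum(1 for h in heights_m if h < CHILD_HEIGHT_M)
    adults = len(heights_m) - children
    child_ratio = children / len(heights_m)
    # Longer interactions with groups of children score higher, capped at 1 minute.
    score = child_ratio * min(interaction_time_s / 60.0, 1.0)
    score *= 1.0 + pedestrian_density   # denser groups of kids raise risk
    score /= 1.0 + adults               # nearby adults suppress risk
    return min(score, 1.0)

def plan_escape(risk, threshold=0.5):
    """Reroute toward a crowded area or taller person when risk is high."""
    return "reroute_toward_adults" if risk > threshold else "continue"
```

For example, two minutes surrounded by three children and no adults scores a much higher risk than the same situation with two adults standing nearby, so the robot would reroute in the first case and carry on in the second.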

via Children Beating Up Robot Inspires New Escape Maneuver System – IEEE Spectrum.

Our spaceLab team is having fun with Artificial Neural Networks…

via Do Computers Dream? | spaceLab.