“Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?” Alan Turing famously posed this question in his groundbreaking 1950 paper, Computing Machinery and Intelligence, and in doing so laid the framework for generations of machine learning scientists. Despite increasingly impressive specialized applications and breathless predictions, we’re still far from programs that can simulate any mind, even one much less complex than a human’s.
Perhaps the key came in what Turing said: “We hope that there is so little mechanism in the child’s brain that something like it can be easily programmed.” This seems, in hindsight, naive. Moravec’s paradox applies: things that seem like the height of human intellect, like a good stimulating chess game, are easy for machines, while simple tasks can be extremely difficult. But if children are our template for the simplest general human-level intelligence we might program, then surely it makes sense for AI researchers to study the many millions of existing examples.
This is precisely what Professor Alison Gopnik and her team at Berkeley do. They seek to answer the question: how sophisticated are children as learners? Where are children still outperforming the best algorithms, and how do they do it?
General, Unsupervised Learning
Some answers were outlined in a recent talk at the International Conference on Machine Learning. The first and most obvious difference between four-year-olds and our best algorithms is that children are extremely good at generalizing from a small set of examples. ML algorithms are the opposite: they can extract structure from huge datasets that no human could ever process, but generally, large amounts of training data are needed for good performance.
This training data usually has to be labeled, although unsupervised learning approaches are also progressing. In other words, there is often a strong “supervisory signal” coded into the algorithm and its dataset, consistently reinforcing the algorithm as it improves. Children can learn to perform various tasks with very little supervision and generalize what they’ve learned to new situations they’ve never seen before.
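To make the idea of a “supervisory signal” concrete, here is a minimal sketch: a toy perceptron trained on a tiny invented dataset. At every step, the model’s prediction is compared against a human-provided label, and the error from that comparison is what drives learning. (The dataset, learning rate, and epoch count are all illustrative choices, not from any real system.)

```python
# Toy perceptron learning logical AND from labeled examples.
# The labels are the "supervisory signal": every update is driven
# by the gap between prediction and label.

data = [  # (features, label) -- labels provide the supervision
    ((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

for epoch in range(20):
    for x, label in data:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        error = label - pred            # the supervisory signal
        w[0] += lr * error * x[0]       # nudge weights toward the label
        w[1] += lr * error * x[1]
        b += lr * error

preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)  # → [0, 0, 0, 1]
```

Remove the labels from this loop and the error term vanishes with them; an unsupervised method would instead have to discover structure (clusters, correlations) in the inputs alone.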
Even in image recognition, where ML has made great strides, algorithms require a large set of images before they can confidently distinguish objects; children may only need one. How is this achieved?
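One way to picture what “learning from a single example” would even mean for a machine is a nearest-neighbor sketch: store one exemplar per category and classify new items by distance in a feature space. The categories, feature vectors, and dimensions below are all invented for illustration; real one-shot learning systems are far more sophisticated.

```python
import math

# Hypothetical one-shot classifier: one labeled exemplar per class,
# classification by Euclidean distance in a made-up feature space.

exemplars = {
    "cup":   (0.9, 0.1, 0.2),   # invented feature vector
    "spoon": (0.1, 0.8, 0.1),
}

def classify(features):
    """Return the label of the nearest stored exemplar."""
    return min(exemplars, key=lambda label: math.dist(exemplars[label], features))

print(classify((0.8, 0.2, 0.3)))  # → cup
print(classify((0.2, 0.9, 0.0)))  # → spoon
```

The hard part, which this sketch assumes away, is the feature space itself: children seem to arrive at representations in which one example is enough, whereas algorithms typically need vast data to learn comparable representations.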
Professor Gopnik and others argue that children have “abstract generative models” that explain how the world works. In other words, children have imagination: They can ask abstract questions like, “If I touch this sharp pin, what will happen?” and then, from very small datasets and limited experience, predict the outcome.
In doing so, they correctly infer the relationship between cause and effect from experience. Children know that an object will prick them unless handled with care because it’s pointy, not because it’s silver or because they found it in the kitchen. This may sound like common sense, but making this kind of causal inference from small datasets is still hard for algorithms, especially across such a wide range of situations.
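A toy version of this inference can be sketched as candidate elimination: given a handful of observations, keep only the features present in every case where the object pricked and absent from every case where it didn’t. The observations and feature names below are invented to echo the pin example, not drawn from Gopnik’s research.

```python
# Toy causal-candidate elimination on a tiny, invented dataset.
# Each observation: (set of features, did it prick?)

observations = [
    ({"pointy", "silver", "kitchen"}, True),   # a pin
    ({"pointy", "wooden", "garden"},  True),   # a thorn
    ({"round",  "silver", "kitchen"}, False),  # a spoon
    ({"round",  "wooden", "garden"},  False),  # a wooden bead
]

all_features = set().union(*(feats for feats, _ in observations))

# A feature survives only if it appears in every "pricked" case
# and in none of the safe cases.
candidates = {
    f for f in all_features
    if all(f in feats for feats, pricked in observations if pricked)
    and not any(f in feats for feats, pricked in observations if not pricked)
}

print(candidates)  # → {'pointy'}
```

With just four observations, “pointy” is the only surviving candidate cause, while “silver” and “kitchen” are ruled out; doing this reliably across the open-ended variety of real situations, rather than a hand-built toy, is what remains hard for algorithms.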