In the past 10 years, the best-performing artificial-intelligence systems - such as the speech recognizers on smartphones or Google’s latest automatic translator - have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized - they get tired of it. So ideas should have the same kind of periodicity!”

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item - a different number - over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number - the sum of the weighted inputs - along all its outgoing connections.

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer - the input layer - and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
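The arithmetic in those two paragraphs is simple enough to sketch in a few lines of code. The Python snippet below is not from the article; it is a minimal illustration, under assumed toy data, of a single node that compares its weighted sum to a hard threshold and of a perceptron-style update rule (one simple choice of training mechanism among many) that nudges randomly initialized weights and a threshold until labeled examples yield the right outputs. For simplicity the node here outputs 1 when it fires rather than passing along the weighted sum.

```python
import random

def node_output(inputs, weights, threshold):
    """Multiply each incoming value by its weight, sum the products,
    and "fire" (output 1) only if the sum exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def train(examples, n_inputs, learning_rate=0.1, epochs=100):
    """Start from random weights and a random threshold, then nudge them
    whenever a labeled example produces the wrong output
    (a perceptron-style rule; real networks use other update schemes)."""
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    threshold = random.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, label in examples:
            error = label - node_output(inputs, weights, threshold)
            if error:
                weights = [w + learning_rate * error * x
                           for w, x in zip(weights, inputs)]
                # A positive error lowers the threshold, making firing easier.
                threshold -= learning_rate * error
    return weights, threshold

# Toy labeled data: the node should fire only when both inputs are 1.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold = train(examples, n_inputs=2)
print([node_output(x, weights, threshold) for x, _ in examples])  # expected: [0, 0, 0, 1]
```

Stacking many such nodes into layers, with each layer’s outputs feeding forward as the next layer’s inputs, gives the feed-forward arrangement described above.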
The neural nets described by McCullough and Pitts in 1944 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCullough and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957.