Who is Stoker? Watson wins at Jeopardy!

IBM's Watson computer ultimately proved to be too much for the humans on Jeopardy! The final Jeopardy category was 19th-century novelists, and Watson's winning response was "Who is Stoker?"

Faced with rising economic and environmental costs, the deep-learning community will need to find ways to increase performance without causing computing demands to go through the roof.

If they don't, progress will stagnate. But don't despair yet: Plenty is being done to address this challenge. One strategy is to use processors designed specifically to be efficient for deep-learning calculations.
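One example of this strategy is exploiting the low-precision matrix units built into modern GPUs. Below is a minimal sketch of mixed-precision training in PyTorch; the tiny model, random data, and hyperparameters are placeholders for illustration, not a recipe from any particular system.

```python
# Minimal sketch: mixed-precision training, which lets specialized
# low-precision hardware (e.g., GPU tensor cores) do the heavy lifting.
# Model, data, and hyperparameters are invented placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

inputs = torch.randn(64, 512, device=device)          # stand-in batch
targets = torch.randint(0, 10, (64,), device=device)  # stand-in labels

optimizer.zero_grad()
# Run the forward pass in float16 where the hardware supports it.
with torch.autocast(device_type=device, dtype=torch.float16,
                    enabled=(device == "cuda")):
    loss = nn.functional.cross_entropy(model(inputs), targets)
scaler.scale(loss).backward()  # scale the loss to avoid fp16 underflow
scaler.step(optimizer)
scaler.update()
```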

Fundamentally, all of these approaches sacrifice the generality of the computing platform for the efficiency of increased specialization. But such specialization faces diminishing returns. So longer-term gains will require adopting wholly different hardware frameworks—perhaps hardware that is based on analog, neuromorphic, optical, or quantum systems.

Thus far, however, these wholly different hardware frameworks have yet to make much of an impact.

Another approach to reducing the computational burden focuses on generating neural networks that, when implemented, are smaller. This tactic lowers the cost each time you use them, but it often increases the training cost (the cost we've described so far in this article). Which of these costs matters most depends on the situation. For a widely used model, running costs are the biggest component of the total sum invested. For other models, for example those that frequently need to be retrained, training costs may dominate.
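The trade-off is easy to see with back-of-the-envelope arithmetic. The sketch below compares a large model against a compressed one that was costlier to produce; all dollar figures are invented placeholders, not measurements.

```python
# Back-of-the-envelope accounting for the training/inference trade-off:
# total cost = one-time training cost + per-query inference cost * queries.
# All numbers below are invented for illustration.

def total_cost(training_cost, cost_per_query, num_queries):
    return training_cost + cost_per_query * num_queries

large      = {"training_cost": 1.0e6, "cost_per_query": 1e-3}  # cheap to make
compressed = {"training_cost": 1.5e6, "cost_per_query": 2e-4}  # cheap to run

for queries in (1e8, 1e9, 1e10):
    c_large = total_cost(large["training_cost"], large["cost_per_query"], queries)
    c_small = total_cost(compressed["training_cost"], compressed["cost_per_query"], queries)
    print(f"{queries:.0e} queries: large ${c_large:,.0f}, compressed ${c_small:,.0f}")

# For a heavily used model the inference term dominates, so the pricier
# training of the compressed model pays for itself; for a rarely used or
# frequently retrained model, the training term dominates instead.
```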

In either case, the total cost must be larger than the training cost on its own. So if training costs are too high, as we've shown, then total costs will be, too. And that's the challenge with the various tactics that have been used to make implementation smaller: They don't reduce training costs enough.

For example, one tactic allows for training a large network but penalizes complexity during training. Another involves training a large network and then "pruning" away unimportant connections. Yet another finds as efficient an architecture as possible by optimizing across many models, a technique called neural-architecture search.
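As one illustration, here is a minimal PyTorch sketch of the first two tactics: an L1 complexity penalty during training, followed by magnitude pruning afterward. The tiny model, random data, and 20 percent pruning fraction are arbitrary choices for illustration.

```python
# Sketch of two of the tactics above. Model, data, and the pruning
# fraction are arbitrary illustrations, not tuned values.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 100), torch.randint(0, 10, (32,))

# Tactic 1: penalize complexity during training with an L1 term,
# which pushes unimportant weights toward zero.
for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    l1 = sum(p.abs().sum() for p in model.parameters())
    (loss + 1e-4 * l1).backward()
    optimizer.step()

# Tactic 2: after training, prune away the smallest-magnitude 20%
# of connections in each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.2)

# Note that both steps happen during or after full-size training,
# so neither reduces the training bill itself.
```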

While each of these techniques can offer significant benefits for implementation, the effects on training are muted—certainly not enough to address the concerns we see in our data. And in many cases they make the training costs higher. One up-and-coming technique that could reduce training costs goes by the name meta-learning. The idea is that the system learns on a variety of data and then can be applied in many areas.

For example, rather than building separate systems to recognize dogs in images, cats in images, and cars in images, a single system could be trained on all of them and used multiple times. Unfortunately, research by Andrei Barbu of MIT has revealed how hard meta-learning can be. He and his coauthors showed that even small differences between the original data and the data where you want to use the system can severely degrade performance. They demonstrated that current image-recognition systems depend heavily on things like whether the object is photographed at a particular angle or in a particular pose.

So even the simple task of recognizing the same objects in different poses causes the accuracy of the system to be nearly halved. Benjamin Recht of the University of California, Berkeley, and others made this point even more starkly, showing that even with novel data sets purposely constructed to mimic the original training data, performance drops by more than 10 percent.
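This kind of brittleness is straightforward to measure: evaluate the same trained model on the original test set and on a shifted copy of it. Below is a hedged sketch in which a random rotation stands in for the pose and angle changes described above; the untrained placeholder network and the CIFAR-10 dataset are assumptions chosen only to make the example self-contained.

```python
# Sketch: measuring the accuracy drop from a simple distribution shift.
# The rotation stands in for the pose/angle changes described above;
# the untrained resnet18 is a placeholder for a real trained classifier.
import torch
from torchvision import datasets, transforms
from torchvision.models import resnet18

def accuracy(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    return correct / total

base_tf    = transforms.ToTensor()
shifted_tf = transforms.Compose([transforms.RandomRotation(30),
                                 transforms.ToTensor()])

base    = datasets.CIFAR10("data", train=False, transform=base_tf, download=True)
shifted = datasets.CIFAR10("data", train=False, transform=shifted_tf, download=True)

make_loader = lambda ds: torch.utils.data.DataLoader(ds, batch_size=256)
model = resnet18(num_classes=10)  # placeholder; substitute a trained model

print("original:", accuracy(model, make_loader(base)))
print("shifted :", accuracy(model, make_loader(shifted)))
```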

If even small changes in data cause large performance drops, the data needed for a comprehensive meta-learning system might be enormous. So the great promise of meta-learning remains far from being realized. Another possible strategy to evade the computational limits of deep learning would be to move to other, perhaps as-yet-undiscovered or underappreciated types of machine learning.

As we described, machine-learning systems constructed around the insight of experts can be much more computationally efficient, but their performance can't reach the same heights as deep-learning systems if those experts cannot distinguish all the contributing factors.

Neuro-symbolic methods and other techniques are being developed to combine the power of expert knowledge and reasoning with the flexibility often found in neural networks.
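As a toy illustration of the general idea (not a sketch of any specific neuro-symbolic system), hand-engineered expert features can feed a small learned model, spending far less computation than learning everything from raw data. Every quantity below is synthetic and hypothetical.

```python
# Toy illustration of combining expert knowledge with learning: expert
# features feed a tiny learned model. All data here is synthetic.
import torch
import torch.nn as nn

def expert_features(raw: torch.Tensor) -> torch.Tensor:
    # Stand-ins for the contributing factors an expert knows matter.
    return torch.stack([raw.mean(dim=1),
                        raw.std(dim=1),
                        raw.max(dim=1).values], dim=1)

raw = torch.randn(256, 1000)           # hypothetical raw measurements
labels = (raw.mean(dim=1) > 0).long()  # synthetic target for the demo

tiny_model = nn.Linear(3, 2)           # 3 expert features -> 2 classes
opt = torch.optim.SGD(tiny_model.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(tiny_model(expert_features(raw)), labels)
    loss.backward()
    opt.step()

# A network learning from the 1,000 raw inputs directly would need far
# more parameters and data to rediscover what the expert encoded, but it
# could also capture factors the expert missed.
```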

Like the situation that Rosenblatt faced at the dawn of neural networks, deep learning is today becoming constrained by the available computational tools. Faced with computational scaling that would be economically and environmentally ruinous, we must either adapt how we do deep learning or face a future of much slower progress. Clearly, adaptation is preferable. A clever breakthrough might find a way to make deep learning more efficient or computer hardware more powerful, which would allow us to continue to use these extraordinarily flexible models.

If not, the pendulum will likely swing back toward relying more on experts to identify what needs to be learned.


Silicon prevailed in the man vs. machine match. The final game started with Ken Jennings, Watson, and the third contestant, Brad Rutter, another Jeopardy champion, each getting a bunch of answers right. But in the second half of the show, Watson, with its massively parallel hardware (2,880 POWER7 processing cores and 16 terabytes of RAM), fired up its DeepQA algorithms and pulled ahead, winning some high-value questions and making only two mistakes.

Watson was developed by 25 researchers over four years. The software runs on a supercomputer with 2,880 IBM Power cores, or computing brains, and 15 terabytes of memory. Deep Blue relied heavily on mathematical calculations, while Watson has to interpret human language, a far more difficult task.

With a normal question, Watson can simply choose not to answer and look smarter in the process. Watson cannot respond to video or audio clues, so the producers of the show agreed to omit them, just as they do for visually or hearing-impaired contestants. Jeopardy is considered an ideal show for testing artificial intelligence because it covers such a broad range of topics and requires a mastery of natural language.
