Practise English listening 27 - Science & Technology - Thấm Tâm Vy

Attachments:

  • practise_english_listening_27_science_technology_tham_tam_vy.pdf
  • Medical Technology.mp3

Text content: Practise English listening 27 - Science & Technology - Thấm Tâm Vy

PRACTISE ENGLISH LISTENING 27
Science & Technology
Medical Technology

A REAL BRAIN WAVE
How to give voice to the speechless

Of the many memorable things about Stephen Hawking, perhaps the most memorable of all was his conversation. The amyotrophic lateral sclerosis that confined him to a wheelchair also stopped him talking, so instead a computer synthesised what became a world-famous voice.

It was, though, a laborious process. Hawking had to twitch a muscle in his cheek to control a computer that helped him build up sentences, word by word. Others who have lost the ability to speak because of disease, or a stroke, can similarly use head or eye movements to control computer cursors to select letters and spell out words. But, at their best, users of these methods struggle to produce more than ten words a minute. That is far slower than the average rate of natural speech, around 150 words a minute.

A better way to communicate would be to read the brain of a paralysed person directly and then translate those readings into synthetic speech. And a study published in Nature this week, by Edward Chang, a neurosurgeon at the University of California, San Francisco, describes just such a technique.

Speaking requires the precise control of almost 100 muscles in the lips, jaw, tongue and throat to produce the characteristic breaths and sounds that make up sentences. By measuring the brain signals that control these vocal-tract muscles, Dr Chang has been able to use a computer to synthesise speech accurately.

The volunteers for Dr Chang’s study were five people with epilepsy who had had electrodes implanted into their brains as part of their treatment. He and his colleagues used these electrodes to record the volunteers’ brain activity while those volunteers spoke several hundred sentences out loud. Specifically, the researchers tracked activity in parts of the brain responsible for controlling the muscles of the vocal tract. To convert those signals into speech they did two things.

First, they trained a computer program to recognise what the signals meant. They did this by feeding the program simultaneously with output from the electrodes and with representations of the shapes the vocal tract adopts when speaking the test sentences—data known from decades of study of voices. Then, when the program had learned the relevant associations, they used it to translate electrode signals into vocal-tract configurations, and thus into sound.

The principle proved, Dr Chang and his team went on to show that their system could synthesise speech even when a volunteer mimed sentences, rather than speaking them out loud. Although the accuracy was not as good, this is an important further step. A practical device that might serve the needs of people like Hawking would need to respond to brain signals which moved few or no muscles at all.

Miming is a stepping stone to that. The team have also shown that the relationship between brain signals and speech is sufficiently similar from person to person for their approach to be employed to create a generic template that a user could fine-tune. That, too, will ease the process of making the technique practical.

So far, Dr Chang has worked with people able to speak normally. The next stage will be to ask whether his system can work for those who cannot speak. There is reason for cautious optimism here. What Dr Chang is doing is analogous to the now well-established field of using brain-computer interfaces to allow paralysed individuals to control limb movements simply by thinking about what it is they want to do.

Restoring speech is a more complex task than moving limbs—but sufficiently similar in principle to give hope to those now in a position similar to that once endured by the late Dr Hawking. [The Economist, UK, April 27, 2019]

Notes:
- amyotrophic lateral sclerosis: chứng xơ cứng teo cơ cột bên (cột xương sống)
- epilepsy: bệnh động kinh
- stepping stone: bàn đạp (để tiến xa)