Scientists say they have created a new device that can turn brain signals into electronic speech. The invention could one day give people who have lost the ability to speak a better way of communicating than current methods.

The device was developed by researchers from the University of California, San Francisco. Their results were recently published in a study in the journal Nature.

The scientists created a "brain machine interface" that is implanted in the brain. The device was built to read and record brain signals that help control the muscles that produce speech. These include the lips, larynx, tongue and jaw.

The experiment involved a two-step process. First, the researchers used a "decoder" to turn electrical brain signals into representations of human vocal movements. A synthesizer then turned those representations into spoken sentences.

Other brain-computer interfaces already exist to help people who cannot speak on their own. Often these systems are trained to follow the eye or facial movements of people who have learned to spell out their thoughts letter by letter. But researchers say this method can produce many errors and is very slow, permitting at most about 10 spoken words per minute. That compares to the 100 to 150 words per minute of natural speech.

Edward Chang is a professor of neurological surgery and a member of the university's Weill Institute for Neurosciences. He was a lead researcher on the project. In a statement, he said the new two-step method presents a "proof of principle" with great possibilities for "real-time communication" in the future.

"For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," Chang said.

The study involved five volunteer patients who were being treated for epilepsy. The individuals had the ability to speak and already had electrodes implanted in their brains. The volunteers were asked to read several hundred sentences aloud while the researchers recorded their brain activity.

The researchers used audio recordings of the volunteers' readings to reconstruct the vocal muscle movements needed to produce human speech. This process permitted the scientists to create a realistic "virtual voice" for each individual, controlled by that person's brain activity. Future studies will test the technology on people who are unable to speak.

Josh Chartier is a speech scientist and doctoral student at the University of California, San Francisco. He said the research team was "shocked" when it first heard the synthesized speech results.

The study reports the spoken sentences were understandable to hundreds of human listeners asked to write out what they heard. The listeners were able to write out 43 percent of the sentences with perfect accuracy. The researchers noted that, as is the case with natural speech, listeners had the highest success rate identifying shorter sentences.

The team also reported more success synthesizing slower speech sounds like "sh," and less success with harder sounds like "b" or "p."

Chartier admitted that much more research on the system will be needed to reach the goal of perfectly reproducing spoken language. But he added: "The levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."