New ‘Inception’ laws to protect people from chilling Mind Control
Posted on https://todayuknews.com
on May 7, 2021
CHILE is set to become the first country in the world to unveil a new ‘inception’ law that will protect people from MIND CONTROL tech.
Guido Girardi, a Chilean senator, is leading moves to ensure citizens’ “neuro-rights” are enshrined in law.
The politician’s announcement comes in response to advancements of chilling new technology capable of compromising “fundamental human autonomy.”
It’s a prediction that mirrors the plot of Hollywood films such as Inception, in which Leonardo DiCaprio plays a thief who steals information by infiltrating his targets’ subconscious.
Girardi claimed that the science, if unregulated, could threaten “the essence of humans, their autonomy, their freedom and their free will”.
He added: “If this technology manages to read [your mind], before you’re aware of what you’re thinking, it could write emotions into your brain: life stories that aren’t yours.”
Girardi’s proposal received unanimous support in parliament last year and is now being considered as part of a constitutional rewrite which his office hopes will be adopted later this year.
The impetus has been the rapid advances in technology over the past decade.
Much of it is driven by efforts to beat disorders such as Parkinson’s disease and epilepsy, meaning researchers have been testing methods to access and manipulate brain activity.
‘This is not science fiction,’ say scientists pushing for ‘neuro-rights’
By Avi Asher-Schapiro
Posted on https://www.reuters.com
on DECEMBER 3, 2020
(Thomson Reuters Foundation) – Scientific advances from deep brain stimulation to wearable scanners are making manipulation of the human mind increasingly possible, creating a need for laws and protections to regulate use of the new tools, top neurologists said on Thursday.
A set of “neuro-rights” should be added to the Universal Declaration of Human Rights adopted by the United Nations, said Rafael Yuste, a neuroscience professor at New York’s Columbia University and organizer of the Morningside Group of scientists and ethicists proposing such standards.
Five rights would guard the brain against abuse from new technologies – rights to identity, free will and mental privacy along with the right of equal access to brain augmentation advances and protection from algorithmic bias, the group says.
“If you can record and change neurons, you can in principle read and write the minds of people,” Yuste said during an online panel at the Web Summit, a global tech conference.
“This is not science fiction. We are doing this in lab animals successfully.”
Neurotechnology has the potential to alter the mechanisms that make people human, so putting it in a “human rights framework” is appropriate, he added.
The U.N.’s declaration, which laid the groundwork for international human rights, was adopted after World War II in 1948.
A need for neuro-rights will grow as the developments become more popular and commercialized, the neurologists said.
Many of these technologies so far have applications in medicine, such as brain-computer interfaces helping patients move prosthetic limbs or communicate after a brain injury.
But those neurotechnologies increasingly will be available outside of the medical context, said John Krakauer, a professor of neurology and neuroscience at Johns Hopkins University in Maryland.
“Deep down what people want is consumer technologies,” he said.
The U.S. Food and Drug Administration has approved deep brain stimulation procedures – implanting electrodes in the brain – to treat a range of disorders from Parkinson’s disease to epilepsy.
Some private companies sell wearable devices to monitor brain activity that claim to be capable of tracking moods and emotions.
Krakauer compared the latest neurotechnologies to advances such as social media and mass advertising, which can be used to alter people’s preferences without their express consent.
“What’s changed now is that the tech can get under the skull and get at our neurons,” he said.
Globally, a number of legal measures are aimed at these advances, including legislation in Chile that if passed would be the first law to establish neuro-rights for citizens.
In November, the Spanish government proposed new rules for regulating artificial intelligence that include specific provisions for neuro-rights, Yuste said.
“This is the first time in history that humans can have access to the contents of people’s minds,” he said.
“We have to think very careful about how we are going to bring this into society.”
Genetically engineered ‘Magneto’ protein remotely controls brain and behaviour
By Mo Costandi
Posted on https://www.theguardian.com
on March 24, 2016
“Badass” new method uses a magnetised protein to activate brain cells rapidly, reversibly, and non-invasively.
Researchers in the United States have developed a new method for controlling the brain circuits associated with complex animal behaviours, using genetic engineering to create a magnetised protein that activates specific groups of nerve cells from a distance.
Understanding how the brain generates behaviour is one of the ultimate goals of neuroscience – and one of its most difficult questions. In recent years, researchers have developed a number of methods that enable them to remotely control specified groups of neurons and to probe the workings of neuronal circuits.
The most powerful of these is a method called optogenetics, which enables researchers to switch populations of related neurons on or off on a millisecond-by-millisecond timescale with pulses of laser light. Another recently developed method, called chemogenetics, uses engineered proteins that are activated by designer drugs and can be targeted to specific cell types.
Although powerful, both of these methods have drawbacks. Optogenetics is invasive, requiring insertion of optical fibres that deliver the light pulses into the brain and, furthermore, the extent to which the light penetrates the dense brain tissue is severely limited. Chemogenetic approaches overcome both of these limitations, but typically induce biochemical reactions that take several seconds to activate nerve cells.
The new technique, developed in Ali Güler’s lab at the University of Virginia in Charlottesville, and described in an advance online publication in the journal Nature Neuroscience, is not only non-invasive, but can also activate neurons rapidly and reversibly.
Several earlier studies have shown that nerve cell proteins which are activated by heat and mechanical pressure can be genetically engineered so that they become sensitive to radio waves and magnetic fields, by attaching them to an iron-storing protein called ferritin, or to inorganic paramagnetic particles. These methods represent an important advance – they have, for example, already been used to regulate blood glucose levels in mice – but involve multiple components which have to be introduced separately.
The new technique builds on this earlier work, and is based on a protein called TRPV4, which is sensitive to both temperature and stretching forces. These stimuli open its central pore, allowing electrical current to flow through the cell membrane; this evokes nervous impulses that travel into the spinal cord and then up to the brain.
Güler and his colleagues reasoned that magnetic torque (or rotating) forces might activate TRPV4 by tugging open its central pore, and so they used genetic engineering to fuse the protein to the paramagnetic region of ferritin, together with short DNA sequences that signal cells to transport proteins to the nerve cell membrane and insert them into it.
When they introduced this genetic construct into human embryonic kidney cells growing in Petri dishes, the cells synthesized the ‘Magneto’ protein and inserted it into their membrane. Application of a magnetic field activated the engineered TRPV4 protein, as evidenced by transient increases in calcium ion concentration within the cells, which were detected with a fluorescence microscope.
Next, the researchers inserted the Magneto DNA sequence into the genome of a virus, together with the gene encoding green fluorescent protein, and regulatory DNA sequences that cause the construct to be expressed only in specified types of neurons. They then injected the virus into the brains of mice, targeting the entorhinal cortex, and dissected the animals’ brains to identify the cells that emitted green fluorescence. Using microelectrodes, they then showed that applying a magnetic field to the brain slices activated Magneto so that the cells produced nervous impulses.
To determine whether Magneto can be used to manipulate neuronal activity in live animals, they injected Magneto into zebrafish larvae, targeting neurons in the trunk and tail that normally control an escape response. They then placed the zebrafish larvae into a specially-built magnetised aquarium, and found that exposure to a magnetic field induced coiling manoeuvres similar to those that occur during the escape response. (This experiment involved a total of nine zebrafish larvae, and subsequent analyses revealed that each larva contained about 5 neurons expressing Magneto.)
In one final experiment, the researchers injected Magneto into the striatum of freely behaving mice, a deep brain structure containing dopamine-producing neurons that are involved in reward and motivation, and then placed the animals into an apparatus split into magnetised and non-magnetised sections. Mice expressing Magneto spent far more time in the magnetised areas than mice that did not, because activation of the protein caused the striatal neurons expressing it to release dopamine, so that the mice found being in those areas rewarding. This shows that Magneto can remotely control the firing of neurons deep within the brain, and also control complex behaviours.
“Previous attempts [using magnets to control neuronal activity] needed multiple components for the system to work – injecting magnetic particles, injecting a virus that expresses a heat-sensitive channel, [or] head-fixing the animal so that a coil could induce changes in magnetism,” explains one neuroscientist commenting on the work. “The problem with having a multi-component system is that there’s so much room for each individual piece to break down.”
“This system is a single, elegant virus that can be injected anywhere in the brain, which makes it technically easier and less likely for moving bells and whistles to break down,” he adds, “and their behavioral equipment was cleverly designed to contain magnets where appropriate so that the animals could be freely moving around.”
‘Magnetogenetics’ is therefore an important addition to neuroscientists’ tool box, which will undoubtedly be developed further, and provide researchers with new ways of studying brain development and function.
Wheeler, M. A., et al. (2016). Genetically targeted magnetic control of the nervous system. Nat. Neurosci., DOI: 10.1038/nn.4265
Meet 10 Companies Working On Reading Your Thoughts (And Even Those Of Your Pets)
by Cathy Hackl
Posted on www.forbes.com
on June 21, 2020
Philosopher John Locke said, “I have always thought the actions of men the best interpreters of their thoughts.” Locke lived during the Age of Enlightenment. He probably wasn’t thinking about human machine actions during his philosophical ponderings. But what does it mean when machine actions are the result of human thoughts? No longer part of science fiction, many would argue that brain-machine and brain-computer interfaces are the next way we will communicate with machines and even with one another.
Brain-machine interfaces (BMIs) and brain-computer interfaces (BCIs) are devices that enable direct communication between a brain and an external device. BCIs let someone type onto a screen – without a keyboard. Brain-machine interfaces make it possible for amputees to move robotic limbs. BCIs range from the invasive, with devices placed directly on the brain, to the non-invasive, with external devices that communicate with machines without surgery.
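The loop a BCI runs (signal in, command out) can be caricatured in a few lines. This is a purely illustrative sketch: the threshold, sample values, and command names are invented, and a real BCI would decode far richer EEG or electrode data than a single number per sample.

```python
# Toy illustration of a BCI control loop: signal samples in, device commands out.
# The "signal" here is simulated; real systems read multi-channel EEG/electrode data.

def decode_command(sample: float, threshold: float = 0.5) -> str:
    """Map one signal sample to a device command (illustrative only)."""
    return "MOVE" if sample > threshold else "REST"

signal_stream = [0.1, 0.7, 0.2, 0.9, 0.4]  # simulated amplitude samples
commands = [decode_command(s) for s in signal_stream]
print(commands)  # ['REST', 'MOVE', 'REST', 'MOVE', 'REST']
```

Real systems replace the fixed threshold with classifiers trained on multi-channel features, but the structure of the loop is the same.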
This type of technology opens up a whole world of business applications, from dangerous jobs that already use robots, to manufacturing, and even the consumer space. Brain-machine interfaces create a new way for humans to interact with technology, whether smartphones, smart speakers, voice assistants, cars, or even each other. Startups and established companies alike realize the promise of brain-machine interfaces. They are racing to link humans to tech and machines, allowing humans to control digital technology using only their minds, which in turn opens up a whole new world of opportunities for businesses and brands to reach the customer of the future.
Here are 10 companies that are working on connecting our brains or actions to our machines and creating the future of input.
Neurable’s mission is very exciting. Ramses Alcaide, founder of Neurable, got the idea for helping people with technology when he was a kid, after his uncle lost his legs in a trucking accident. Alcaide said, “the idea of developing technology for people who are differently abled has been my big, never-ending quest.” Neurable launched onto the brain-computer interface scene in 2017 at SIGGRAPH with a proof-of-concept BCI game called “The Awakening.” Users put on a VR headset to escape from a room using only their minds. In December 2019, Neurable raised a $6 million Series A round to develop an everyday consumer brain-computer interface in the form of headphones.
Alcaide sees neurotechnology built into a pair of headphones as the first step towards a BCI for consumers. Think about stopping, starting, or skipping songs with your mind without ever touching your phone. Interacting with smart devices with just our thoughts through a headphone-like device is impressive enough on its own. For Neurable, it’s the data behind the interactions that show the real value of BCIs.
Cognitive analytics are “measures of different mental states, especially those aligned with performance.” BCI-enabled headphones could help a person “enter their desired emotion and then have a customized playlist generated to provoke that response,” not to mention open up a whole new world of metrics for marketers, trainers, health professionals, and a variety of other industries. Alcaide believes computing is going to become more spatial. He said, “As it continues to go down that path, we need forms of interaction that enable us to more seamlessly interact with our technology.”
MindX believes the next frontier in computing is a direct link from the brain to the digital world. They’re creating this link by “combining neurotechnology, augmented reality and artificial intelligence to create a ‘look-and-think’ interface for next-generation spatial computing applications.” Part of spatial computing is being able to interact with computers beyond a two-dimensional screen.
MindX uses smart glasses to create a link between human brains and technology. Julia Brown, MindX’s CEO, said smart glasses will let wearers access information with a single thought: the glasses track where the wearer is looking, while brain signals indicate what they are thinking about. BCI-enabled smart glasses open up a world of opportunities for visual search. Think about your lost car keys, and the smart glasses can locate them. Wonder what someone is wearing, and get the brand and links to places to buy it, all with a thought.
While some brain-computer interface companies focus on understanding the brain and cognitive metrics, others focus on real-time device control. NextMind, headquartered in Paris, France, uses a non-invasive BMI “that translates brain signals instantly from the user’s visual cortex into digital commands for any device in real-time.” NextMind debuted their device at CES 2020. Visitors to the booth demoed changing channels on a TV with just their thoughts.
Users wear NextMind on the backs of their heads. It “creates a symbiotic connection with the digital world” by combining neural networks and neural signals. The NextMind SDK is open to developers, and the device is priced at a point where, the industry believes, consumers are ready for the next phase in computer interaction.
Neurosity’s goal is to help developers get focused faster and stay focused longer. Notion, Neurosity’s thought-powered computer, is an EEG headset with eight sensors. In one demo, a woman scrolls through a recipe on her tablet while cooking; in another, a man changes the lighting in the room with his mind. The Notion brain sensor can be pre-ordered. The device touts its secure design, saying “it never stores your brainwaves”, a feature worth looking for in any BCI.
Neurosity launched dev kits in 2019. The Neurosity developer community is one of the signs that brain-computer interfaces have arrived. Developers can write apps for Notion’s brain sensor, which is developed to do two things: to detect human intent and to quantify the self. Think of it like wearing a fitness tracker for the brain. In April 2020, Neurosity temporarily cut the price of Notion pre-orders to $799 for developers. Neurosity pledges their support to “developers interested in helping quantify the human mind even further” by making their team available to brainstorm, code, and deploy neuro apps.
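A “fitness tracker for the brain” of this kind typically reduces raw EEG to band power, the average spectral energy in a frequency range such as alpha (8–12 Hz), as a crude proxy for mental state. Here is a minimal sketch using NumPy, with a synthetic signal standing in for headset data; the sampling rate and band edges are illustrative and do not reflect Notion’s actual API.

```python
import numpy as np

FS = 256  # sampling rate in Hz (typical for consumer EEG; illustrative)

def band_power(signal, low, high, fs=FS):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return float(power[mask].mean())

# One second of synthetic "EEG": a 10 Hz (alpha-band) tone plus noise.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(FS)

alpha = band_power(eeg, 8, 12)   # band containing the tone
beta = band_power(eeg, 13, 30)   # band containing only noise
print(alpha > beta)  # True: the alpha band dominates this synthetic signal
```

An app built on a dev kit like Neurosity’s would compute something of this shape on streaming windows and map the resulting metric to a notion of focus.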
Kernel is a neurotechnology company based in Los Angeles, California. Its aim is to create “a brain interface that develops real-world applications of high-resolution brain activity.” Kernel’s founder and CEO, Bryan Johnson, believes in a world where people are empowered by technology, not limited by it. He sees neuroscience, specifically Kernel’s “neuroscience as a service (NaaS),” as a way to get there. Kernel was featured in I Am Human, a 2020 award-winning documentary about “the scientists and entrepreneurs on a quest to unlock the secrets of the brain.”
Kernel has created two experiments with its technology. One is Speller, which allows participants to type with only their gaze and a visual keyboard. The other is Sound ID, which can identify a song from the listener’s brain signals. These experiments show that with just a helmet, brain scientists can run the same type of experiments as labs with room-sized equipment, and study thousands more people than they currently can. Johnson believes this can help people who have suffered strokes and are unable to speak, or those dealing with mental disorders.
There are so many applications when it comes to brain-machine interfaces and neuroscience. Nectome’s technology is developed to preserve human memory by studying how the brain physically creates memories. Nectome isn’t just creating a BMI for the present. They’re hoping to change how people “preserve the languages, cultures, and wisdom of the past, and how health care engages with individuals’ memories and personal narratives.”
Sam Altman, president of Y Combinator, is “one of 25 people who have put down a $10,000 refundable deposit to join a waiting list at Nectome.” The only catch: Nectome needs a living brain to capture the memories, and the procedure kills the patient. “Nectome planned to test it with terminally ill volunteers in California, which permits doctor-assisted suicide for those patients.” Altman said of the procedure, “I assume my brain will be uploaded to the cloud.” The startup has faced some setbacks but appears to still be in operation.
Eventually, Nectome believes their biological preservation techniques will be like an episode from Amazon’s Upload TV series. At the end of their life, patients can choose to “upload” themselves into a digital afterlife.
CTRL-Labs uses non-invasive neural interfaces to “expand human bandwidth”. It recreates the “0s and 1s” of neurons by listening to muscle twitches, then feeds the signals into machine learning models to decode the person’s intention. The decoded output is fed back to the wearer, creating a symbiotic relationship. Thomas Reardon, CEO of CTRL-Labs, said, “AI and Machine Learning can be dominated by us.” CTRL-Labs does all of this with a wristband.
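The decoding step described above, turning muscle-twitch signals into an intent label, is at heart a classification problem. Below is a deliberately tiny sketch with simulated feature vectors and a nearest-centroid model; the gesture names and features are invented for illustration and nothing here reflects CTRL-Labs’ actual pipeline.

```python
import numpy as np

# Simulated training data: each row is a feature vector extracted from one
# window of wristband muscle signals; labels are the intended gestures.
X_train = np.array([[0.9, 0.1], [0.8, 0.2],   # "pinch" examples
                    [0.1, 0.9], [0.2, 0.8]])  # "fist" examples
y_train = np.array(["pinch", "pinch", "fist", "fist"])

def nearest_centroid_predict(x):
    """Classify a feature vector by its closest class centroid."""
    labels = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in labels])
    dists = np.linalg.norm(centroids - x, axis=1)
    return str(labels[np.argmin(dists)])

print(nearest_centroid_predict(np.array([0.85, 0.15])))  # pinch
print(nearest_centroid_predict(np.array([0.15, 0.85])))  # fist
```

A production decoder would use far richer features and a trained neural network, but the input-to-intent mapping has this same shape.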
Facebook’s leadership has talked about a new type of interface “that includes work around direct brain interfaces that are going to, eventually, one day, let you communicate using only your mind.” In September 2019, Facebook bought CTRL-Labs, saying of the acquisition: “The goal is to eventually make it so that you can think something and control something in virtual or augmented reality.”
Neuralink is a company owned by none other than Elon Musk. The man who made electric cars cool (Tesla) and sends astronauts to space in his own spacecraft (SpaceX) also wants to connect humans to machines. Neuralink takes a slightly different approach to brain-machine interfaces by placing “threads” into the brain. Musk “wants his brain implants to stop humans being outpaced by artificial intelligence.”
Neuralink’s threads connect to a 4mm chip called the N1. The chips are “placed close to important parts of the brain and are able to detect messages as they are relayed between neurons, recording each impulse and stimulating their own.” The chip connects to a Bluetooth-enabled wireless device worn over the ear. Currently, the chip is placed via traditional brain surgery, but Musk envisions that in the future it will be inserted virtually painlessly. Applications for Neuralink are endless – from treating neurological disorders to replacing language, and eventually turning humans into cyborgs.
Paradromics developed brain-computer interface technology to help those disconnected from the world by mental illness, paralysis, or other types of brain disorders. Paradromics believes it can meet medical challenges with technical solutions, namely a high-data-rate brain-computer interface. Similar to Neuralink, Paradromics places electrode arrays on the brain, via a “computer chip that plugs into a part of the brain called the cortex.” With Paradromics’ technology, mental disorders and injuries no longer have to be debilitating; those affected can be connected back to the world.
Brain-computer interfaces aren’t just for people. Zoolingua, founded by Con Slobodchikoff, wants people to understand dogs. Its device will allow dog and human to communicate in both directions. The translating dog collar from the movie Up is coming to real life. Zoolingua bases its technology on research: “Observing (through video) dog vocalizations and behavior in specific contexts; classifying the complex forms of communication that occur; and working with computer programs to effectively and accurately decode and translate into US-English.”
According to an Amazon report, “advances in AI and machine learning will enable companies to make devices that can accurately translate a cat’s meows and a dog’s barks into English.” William Higham, co-author of the report, “believes devices that can talk dog could be less than 10 years away.”
Separate from Zoolingua, another person working on decoding Fido’s thoughts is Dr. Gregory Berns of Emory University, a neuroscientist who is also interested in what dogs think. Dr. Berns developed a “go/no-go” test to scan dog brains in MRI machines. The results show “dogs use corresponding parts of their brain to solve similar tasks as people do,” something not previously seen in non-primates.
What Comes Next
Neuroscience technology is a quickly developing field. It’s one with endless applications for understanding the brain, unlocking human potential, and preserving today’s minds for the future. Some of the companies listed above are working towards specific use cases. Some use direct-brain sensors while others use non-invasive devices.
What each brain-machine interface company has in common is that they see the world as a connected place. It’s going to become even more so. The future of computing is beyond two-dimensional interactions. It’s more than voice, facial recognition, artificial intelligence, and augmented reality.
It’s all these things coming together under the power of the human brain.
The Pentagon’s Push to Program Soldiers’ Brains
by Michael Joseph Gross
Posted on https://www.theatlantic.com
in NOVEMBER 2018 ISSUE
The military wants future super-soldiers to control robots with their thoughts.
I. Who Could Object?
“Tonight I would like to share with you an idea that I am extremely passionate about,” the young man said. His long black hair was swept back like a rock star’s, or a gangster’s. “Think about this,” he continued. “Throughout all human history, the way that we have expressed our intent, the way we have expressed our goals, the way we have expressed our desires, has been limited by our bodies.” When he inhaled, his rib cage expanded and filled out the fabric of his shirt. Gesturing toward his body, he said, “We are born into this world with this. Whatever nature or luck has given us.”
His speech then took a turn: “Now, we’ve had a lot of interesting tools over the years, but fundamentally the way that we work with those tools is through our bodies.” Then a further turn: “Here’s a situation that I know all of you know very well—your frustration with your smartphones, right? This is another tool, right? And we are still communicating with these tools through our bodies.”
And then it made a leap: “I would claim to you that these tools are not so smart. And maybe one of the reasons why they’re not so smart is because they’re not connected to our brains. Maybe if we could hook those devices into our brains, they could have some idea of what our goals are, what our intent is, and what our frustration is.”
So began “Beyond Bionics,” a talk by Justin C. Sanchez, then an associate professor of biomedical engineering and neuroscience at the University of Miami, and a faculty member of the Miami Project to Cure Paralysis. He was speaking at a TEDx conference in Florida in 2012. What lies beyond bionics? Sanchez described his work as trying to “understand the neural code,” which would involve putting “very fine microwire electrodes”—the diameter of a human hair—“into the brain.” When we do that, he said, we would be able to “listen in to the music of the brain” and “listen in to what somebody’s motor intent might be” and get a glimpse of “your goals and your rewards” and then “start to understand how the brain encodes behavior.”
He explained, “With all of this knowledge, what we’re trying to do is build new medical devices, new implantable chips for the body that can be encoded or programmed with all of these different aspects. Now, you may be wondering, what are we going to do with those chips? Well, the first recipients of these kinds of technologies will be the paralyzed. It would make me so happy by the end of my career if I could help get somebody out of their wheelchair.”
Sanchez went on, “The people that we are trying to help should never be imprisoned by their bodies. And today we can design technologies that can help liberate them from that. I’m truly inspired by that. It drives me every day when I wake up and get out of bed. Thank you so much.” He blew a kiss to the audience.
The mission is to make human beings something other than what we are, with powers beyond the ones we’re born with.
A year later, Justin Sanchez went to work for the Defense Advanced Research Projects Agency, the Pentagon’s R&D department. At DARPA, he now oversees all research on the healing and enhancement of the human mind and body. And his ambition involves more than helping get disabled people out of their wheelchairs—much more.
DARPA has dreamed for decades of merging human beings and machines. Some years ago, when the prospect of mind-controlled weapons became a public-relations liability for the agency, officials resorted to characteristic ingenuity. They recast the stated purpose of their neurotechnology research to focus ostensibly on the narrow goal of healing injury and curing illness. The work wasn’t about weaponry or warfare, agency officials claimed. It was about therapy and health care. Who could object? But even if this claim were true, such changes would have extensive ethical, social, and metaphysical implications. Within decades, neurotechnology could cause social disruption on a scale that would make smartphones and the internet look like gentle ripples on the pond of history.
Most unsettling, neurotechnology confounds age-old answers to this question: What is a human being?
II. High Risk, High Reward
In his 1958 State of the Union address, President Dwight Eisenhower declared that the United States of America “must be forward-looking in our research and development to anticipate the unimagined weapons of the future.” A few weeks later, his administration created the Advanced Research Projects Agency, a bureaucratically independent body that reported to the secretary of defense. This move had been prompted by the Soviet launch of the Sputnik satellite. The agency’s original remit was to hasten America’s entry into space.
During the next few years, ARPA’s mission grew to encompass research into “man-computer symbiosis” and a classified program of experiments in mind control that was code-named Project Pandora. There were bizarre efforts that involved trying to move objects at a distance by means of thought alone. In 1972, with an increment of candor, the word Defense was added to the name, and the agency became DARPA. Pursuing its mission, DARPA funded researchers who helped invent technologies that changed the nature of battle (stealth aircraft, drones) and shaped daily life for billions (voice-recognition technology, GPS devices). Its best-known creation is the internet.
The agency’s penchant for what it calls “high-risk, high-reward” research ensured that it would also fund a cavalcade of folly. Project Seesaw, a quintessential Cold War boondoggle, envisioned a “particle-beam weapon” that could be deployed in the event of a Soviet attack. The idea was to set off a series of nuclear explosions beneath the Great Lakes, creating a giant underground chamber. Then the lakes would be drained, in a period of 15 minutes, to generate the electricity needed to set off a particle beam. The beam would accelerate through tunnels hundreds of miles long (also carved out by underground nuclear explosions) in order to muster enough force to shoot up into the atmosphere and knock incoming Soviet missiles out of the sky. During the Vietnam War, DARPA tried to build a Cybernetic Anthropomorphous Machine, a jungle vehicle that officials called a “mechanical elephant.”
One aspiration: the ability, via computer, to transfer knowledge and thoughts from one person’s mind to another’s.
The diverse and sometimes even opposing goals of DARPA scientists and their Defense Department overlords merged into a murky, symbiotic research culture—“unencumbered by the typical bureaucratic oversight and uninhibited by the restraints of scientific peer review,” Sharon Weinberger wrote in a recent book, The Imagineers of War. In Weinberger’s account, DARPA’s institutional history involves many episodes of introducing a new technology in the context of one appealing application, while hiding other genuine but more troubling motives. At DARPA, the left hand knows, and doesn’t know, what the right hand is doing.
The agency is deceptively compact. A mere 220 employees, supported by about 1,000 contractors, report for work each day at DARPA’s headquarters, a nondescript glass-and-steel building in Arlington, Virginia, across the street from the practice rink for the Washington Capitals. About 100 of these employees are program managers—scientists and engineers, part of whose job is to oversee about 2,000 outsourcing arrangements with corporations, universities, and government labs. The effective workforce of DARPA actually runs into the tens of thousands. The budget is officially said to be about $3 billion, and has stood at roughly that level for an implausibly long time—the past 14 years.
The Biological Technologies Office, created in 2014, is the newest of DARPA’s six main divisions. This is the office headed by Justin Sanchez. One purpose of the office is to “restore and maintain warfighter abilities” by various means, including many that emphasize neurotechnology—applying engineering principles to the biology of the nervous system. For instance, the Restoring Active Memory program develops neuroprosthetics—tiny electronic components implanted in brain tissue—that aim to alter memory formation so as to counteract traumatic brain injury. Does DARPA also run secret biological programs? In the past, the Department of Defense has done such things. It has conducted tests on human subjects that were questionable, unethical, or, many have argued, illegal. The Big Boy protocol, for example, compared radiation exposure of sailors who worked above and below deck on a battleship, never informing the sailors that they were part of an experiment.
Last year I asked Sanchez directly whether any of DARPA’s neurotechnology work, specifically, was classified. He broke eye contact and said, “I can’t—We’ll have to get off that topic, because I can’t answer one way or another.” When I framed the question personally—“Are you involved with any classified neuroscience project?”—he looked me in the eye and said, “I’m not doing any classified work on the neurotechnology end.”
If his speech is careful, it is not spare. Sanchez has appeared at public events with some frequency (videos are posted on DARPA’s YouTube channel), to articulate joyful streams of good news about DARPA’s proven applications—for instance, brain-controlled prosthetic arms for soldiers who have lost limbs. Occasionally he also mentions some of his more distant aspirations. One of them is the ability, via computer, to transfer knowledge and thoughts from one person’s mind to another’s.
III. “We Try to Find Ways to Say Yes”
Medicine and biology were of minor interest to darpa until the 1990s, when biological weapons became a threat to U.S. national security. The agency made a significant investment in biology in 1997, when darpa created the Controlled Biological Systems program. The zoologist Alan S. Rudolph managed this sprawling effort to integrate the built world with the natural world. As he explained it to me, the aim was “to increase, if you will, the baud rate, or the cross-communication, between living and nonliving systems.” He spent his days working through questions such as “Could we unlock the signals in the brain associated with movement in order to allow you to control something outside your body, like a prosthetic leg or an arm, a robot, a smart home—or to send the signal to somebody else and have them receive it?”
Human enhancement became an agency priority. “Soldiers having no physical, physiological, or cognitive limitation will be key to survival and operational dominance in the future,” predicted Michael Goldblatt, who had been the science and technology officer at McDonald’s before joining darpa in 1999. To enlarge humanity’s capacity to “control evolution,” he assembled a portfolio of programs with names that sounded like they’d been taken from video games or sci-fi movies: Metabolic Dominance, Persistence in Combat, Continuous Assisted Performance, Augmented Cognition, Peak Soldier Performance, Brain-Machine Interface.
The programs of this era, as described by Annie Jacobsen in her 2015 book, The Pentagon’s Brain, often shaded into mad-scientist territory. The Continuous Assisted Performance project attempted to create a “24/7 soldier” who could go without sleep for up to a week. (“My measure of success,” one darpa official said of these programs, “is that the International Olympic Committee bans everything we do.”)
Dick Cheney relished this kind of research. In the summer of 2001, an array of “super-soldier” programs was presented to the vice president. His enthusiasm contributed to the latitude that President George W. Bush’s administration gave darpa—at a time when the agency’s foundation was shifting. Academic science gave way to tech-industry “innovation.” Tony Tether, who had spent his career working alternately for Big Tech, defense contractors, and the Pentagon, became darpa’s director. After the 9/11 attacks, the agency announced plans for a surveillance program called Total Information Awareness, whose logo included an all-seeing eye emitting rays of light that scanned the globe. The pushback was intense, and Congress took darpa to task for Orwellian overreach. The head of the program—Admiral John Poindexter, who had been tainted by scandal back in the Reagan years—later resigned, in 2003. The controversy also drew unwanted attention to darpa’s research on super-soldiers and the melding of mind and machine. That research made people nervous, and Alan Rudolph, too, found himself on the way out.
In this time of crisis, darpa invited Geoff Ling, a neurology‑ICU physician and, at the time, an active-duty Army officer, to join the Defense Sciences Office. (Ling went on to work in the Biological Technologies Office when it spun out from Defense Sciences, in 2014.) When Ling was interviewed for his first job at darpa, in 2002, he was preparing for deployment to Afghanistan and thinking about very specific combat needs. One was a “pharmacy on demand” that would eliminate the bulk of powdery fillers from drugs in pill or capsule form and instead would formulate active ingredients for ingestion via a lighter, more compact, dissolving substance—like Listerine breath strips. This eventually became a darpa program. The agency’s brazen sense of possibility buoyed Ling, who recalls with pleasure how colleagues told him, “We try to find ways to say yes, not ways to say no.” With Rudolph gone, Ling picked up the torch.
Ling talks fast. He has a tough-guy voice. The faster he talks, the tougher he sounds, and when I met him, his voice hit top speed as he described a first principle of Defense Sciences. He said he had learned this “particularly” from Alan Rudolph: “Your brain tells your hands what to do. Your hands basically are its tools, okay? And that was a revelation to me.” He continued, “We are tool users—that’s what humans are. A human wants to fly, he builds an airplane and flies. A human wants to have recorded history, and he creates a pen. Everything we do is because we use tools, right? And the ultimate tools are our hands and feet. Our hands allow us to work with the environment to do stuff, and our feet take us where our brain wants to go. The brain is the most important thing.”
Ling connected this idea of the brain’s primacy with his own clinical experience of the battlefield. He asked himself, “How can I liberate mankind from the limitations of the body?” The program for which Ling became best known is called Revolutionizing Prosthetics. Since the Civil War, as Ling has said, the prosthetic arm given to most amputees has been barely more sophisticated than “a hook,” and not without risks: “Try taking care of your morning ablutions with that bad boy, and you’re going to need a proctologist every goddamn day.” With help from darpa colleagues and academic and corporate researchers, Ling and his team built something that was once all but unimaginable: a brain-controlled prosthetic arm.
No invention since the internet has been such a reliable source of good publicity for darpa. Milestones in its development were hailed with wonder. In 2012, 60 Minutes showed a paralyzed woman named Jan Scheuermann feeding herself a bar of chocolate using a robotic arm that she manipulated by means of a brain implant.
Yet darpa’s work to repair damaged bodies was merely a marker on a road to somewhere else. The agency has always had a larger mission, and in a 2015 presentation, one program manager—a Silicon Valley recruit—described that mission: to “free the mind from the limitations of even healthy bodies.” What the agency learns from healing makes way for enhancement. The mission is to make human beings something other than what we are, with powers beyond the ones we’re born with and beyond the ones we can organically attain.
Since then, darpa’s work in neurotechnology has avowedly widened in scope, to embrace “the broader aspects of life,” Sanchez told me, “beyond the person in the hospital who is using it to heal.” The logical progression of all this research is the creation of human beings who are ever more perfect, by certain technological standards. New and improved soldiers are necessary and desirable for darpa, but they are just the window-display version of the life that lies ahead.
IV. “Over the Horizon”
Consider memory, Sanchez told me: “Everybody thinks about what it would be like to give memory a boost by 20, 30, 40 percent—pick your favorite number—and how that would be transformative.” He spoke of memory enhancement through neural interface as an alternative form of education. “School in its most fundamental form is a technology that we have developed as a society to help our brains to do more,” he said. “In a different way, neurotechnology uses other tools and techniques to help our brains be the best that they can be.” One technique was described in a 2013 paper, a study involving researchers at Wake Forest University, the University of Southern California, and the University of Kentucky. Researchers performed surgery on 11 rats. Into each rat’s brain, an electronic array—featuring 16 stainless-steel wires—was implanted. After the rats recovered from surgery, they were separated into two groups, and they spent a period of weeks getting educated, though one group was educated more than the other.
The less educated group learned a simple task, involving how to procure a droplet of water. The more educated group learned a complex version of that same task—to procure the water, these rats had to persistently poke levers with their nose despite confounding delays in the delivery of the water droplet. When the more educated group of rats attained mastery of this task, the researchers exported the neural-firing patterns recorded in the rats’ brains—the memory of how to perform the complex task—to a computer.
“What we did then was we took those signals and we gave it to an animal that was stupid,” Geoff Ling said at a darpa event in 2015—meaning that researchers took the neural-firing patterns encoding the memory of how to perform the more complex task, recorded from the brains of the more educated rats, and transferred those patterns into the brains of the less educated rats—“and that stupid animal got it. They were able to execute that full thing.” Ling summarized: “For this rat, we reduced the learning period from eight weeks down to seconds.”
“They could inject memory using the precise neural codes for certain skills,” Sanchez told me. He believes that the Wake Forest experiment amounts to a foundational step toward “memory prosthesis.” This is the stuff of The Matrix. Though many researchers question the findings—cautioning that, really, it can’t be this simple—Sanchez is confident: “If I know the neural codes in one individual, could I give that neural code to another person? I think you could.” Under Sanchez, darpa has funded human experiments at Wake Forest, the University of Southern California, and the University of Pennsylvania, using similar mechanisms in analogous parts of the brain. These experiments did not transfer memory from one person to another, but instead gave individuals a memory “boost.” Implanted electrodes recorded neuronal activity associated with recognizing patterns (at Wake Forest and USC) and memorizing word lists (at Penn) in certain brain circuits. Then electrodes fed back those recordings of neuronal activity into the same circuits as a form of reinforcement. The result, in both cases, was significantly improved memory recall.
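The logic of that closed-loop "boost" protocol (record the neural activity that accompanies successful encoding, then feed the same pattern back into the circuit as reinforcement) can be caricatured in a few lines of code. Everything below is invented for illustration: the channel count, gain, and probabilities mirror only the shape of the protocol, not any actual implant software.

```python
import random

random.seed(0)  # deterministic toy data

def record_pattern(n_channels=16):
    """Stand-in for recording per-channel firing rates from an implanted
    array (16 channels, matching the rat study's electrode count) during
    a successful act of encoding."""
    return [random.random() for _ in range(n_channels)]

def replay_boost(pattern, gain=0.3):
    """Stand-in for feeding the recorded pattern back into the same
    circuit; returns the resulting bump in recall probability."""
    mean_rate = sum(pattern) / len(pattern)
    return gain * mean_rate

def recall_probability(baseline, boost):
    return min(1.0, baseline + boost)

pattern = record_pattern()
boosted = recall_probability(0.55, replay_boost(pattern))
assert boosted > 0.55  # replaying the encoding pattern improves recall
```

Note that in this caricature nothing is transferred between individuals; the recording is played back into the same circuits it came from, which is exactly what distinguishes the human "boost" experiments from the rat-to-rat transfer claim.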
Doug Weber, a neural engineer at the University of Pittsburgh who recently finished a four-year term as a darpa program manager, working with Sanchez, is a memory-transfer skeptic. Born in Wisconsin, he has the demeanor of a sitcom dad: not too polished, not too rumpled. “I don’t believe in the infinite limits of technology evolution,” he told me. “I do believe there are going to be some technical challenges which are impossible to achieve.” For instance, when scientists put electrodes in the brain, those devices eventually fail—after a few months or a few years. The most intractable problem is blood leakage. When foreign material is put into the brain, Weber said, “you undergo this process of wounding, bleeding, healing, wounding, bleeding, healing, and whenever blood leaks into the brain compartment, the activity in the cells goes way down, so they become sick, essentially.” More effectively than any fortress, the brain rejects invasion.
Even if the interface problems that limit us now didn’t exist, Weber went on to say, he still would not believe that neuroscientists could enable the memory-prosthesis scenario. Some people like to think about the brain as if it were a computer, Weber explained, “where information goes from A to B to C, like everything is very modular. And certainly there is clear modular organization in the brain. But it’s not nearly as sharp as it is in a computer. All information is everywhere all the time, right? It’s so widely distributed that achieving that level of integration with the brain is far out of reach right now.”
Peripheral nerves, by contrast, conduct signals in a more modular fashion. The biggest, longest peripheral nerve is the vagus. It connects the brain with the heart, the lungs, the digestive tract, and more. Neuroscientists understand the brain’s relationship with the vagus nerve more clearly than they understand the intricacies of memory formation and recall among neurons within the brain. Weber believes that it may be possible to stimulate the vagus nerve in ways that enhance the process of learning—not by transferring experiential memories, but by sharpening the facility for certain skills.
To test this hypothesis, Weber directed the creation of a new program in the Biological Technologies Office, called Targeted Neuroplasticity Training (TNT). Teams of researchers at seven universities are investigating whether vagal-nerve stimulation can enhance learning in three areas: marksmanship, surveillance and reconnaissance, and language. The team at Arizona State has an ethicist on staff whose job, according to Weber, “is to be looking over the horizon to anticipate potential challenges and conflicts that may arise” regarding the ethical dimensions of the program’s technology, “before we let the genie out of the bottle.” At a TNT kickoff meeting, the research teams spent 90 minutes discussing the ethical questions involved in their work—the start of a fraught conversation that will broaden to include many others, and last for a very long time.
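Weber's hypothesis, as the TNT program frames it, is that stimulation paired with practice raises the rate of learning rather than implanting content. A toy model makes the distinction concrete; the numbers here (base learning rate, stimulation gain) are entirely hypothetical, not TNT results.

```python
def train(trials, base_rate=0.02, stim_gain=2.0, stimulate=False):
    """Toy exponential-learning model: skill approaches 1.0 with practice.
    Pairing each trial with stimulation scales the learning rate; it does
    not copy a skill in directly, which is the point of the hypothesis."""
    skill = 0.0
    rate = base_rate * (stim_gain if stimulate else 1.0)
    for _ in range(trials):
        skill += rate * (1.0 - skill)  # diminishing returns near mastery
    return skill

control = train(100)                  # practice alone
paired = train(100, stimulate=True)   # practice paired with stimulation
```

In this caricature both learners improve with practice; stimulation only compresses the number of trials needed, which is the effect the marksmanship, reconnaissance, and language studies are designed to detect or rule out.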
darpa officials refer to the potential consequences of neurotechnology by invoking the acronym elsi, a term of art devised for the Human Genome Project. It stands for “ethical, legal, social implications.” The man who led the discussion on ethics among the research teams was Steven Hyman, a neuroscientist and neuroethicist at MIT and Harvard’s Broad Institute. Hyman is also a former head of the National Institute of Mental Health. When I spoke with him about his work on darpa programs, he noted that one issue needing attention is “cross talk.” A man-machine interface that does not just “read” someone’s brain but also “writes into” someone’s brain would almost certainly create “cross talk between those circuits which we are targeting and the circuits which are engaged in what we might call social and moral emotions,” he said. It is impossible to predict the effects of such cross talk on “the conduct of war” (the example he gave), much less, of course, on ordinary life.
Weber and a darpa spokesperson related some of the questions the researchers asked in their ethics discussion: Who will decide how this technology gets used? Would a superior be able to force subordinates to use it? Will genetic tests be able to determine how responsive someone would be to targeted neuroplasticity training? Would such tests be voluntary or mandatory? Could the results of such tests lead to discrimination in school admissions or employment? What if the technology affects moral or emotional cognition—our ability to tell right from wrong or to control our own behavior?
Recalling the ethics discussion, Weber told me, “The main thing I remember is that we ran out of time.”
V. “You Can Weaponize Anything”
In The Pentagon’s Brain, Annie Jacobsen suggested that darpa’s neurotechnology research, including upper-limb prosthetics and the brain-machine interface, is not what it seems: “It is likely that darpa’s primary goal in advancing prosthetics is to give robots, not men, better arms and hands.” Geoff Ling rejected the gist of her conclusion when I summarized it for him (he hadn’t read the book). He told me, “When we talk about stuff like this, and people are looking for nefarious things, I always say to them, ‘Do you honestly believe that the military that your grandfather served in, your uncle served in, has changed into being Nazis or the Russian army?’ Everything we did in the Revolutionizing Prosthetics program—everything we did—is published. If we were really building an autonomous-weapons system, why would we publish it in the open literature for our adversaries to read? We hid nothing. We hid not a thing. And you know what? That meant that we didn’t just do it for America. We did it for the world.”
I started to say that publishing this research would not prevent its being misused. But the terms use and misuse overlook a bigger issue at the core of any meaningful neurotechnology-ethics discussion. Will an enhanced human being—a human being possessing a neural interface with a computer—still be human, as people have experienced humanity through all of time? Or will such a person be a different sort of creature?
The U.S. government has put limits on darpa’s power to experiment with enhancing human capabilities. Ling says colleagues told him of a “directive”: “Congress was very specific,” he said. “They don’t want us to build a superperson.” This can’t be the announced goal, Congress seems to be saying, but if we get there by accident—well, that’s another story. Ling’s imagination remains at large. He told me, “If I gave you a third eye, and the eye can see in the ultraviolet, that would be incorporated into everything that you do. If I gave you a third ear that could hear at a very high frequency, like a bat or like a snake, then you would incorporate all those senses into your experience and you would use that to your advantage. If you can see at night, you’re better than the person who can’t see at night.”
Enhancing the senses to gain superior advantage—this language suggests weaponry. Such capacities could certainly have military applications, Ling acknowledged—“You can weaponize anything, right?”—before he dismissed the idea and returned to the party line: “No, actually, this has to do with increasing a human’s capability” in a way that he compared to military training and civilian education, and justified in economic terms.
“Let’s say I gave you a third arm,” and then a fourth arm—so, two additional hands, he said. “You would be more capable; you would do more things, right?” And if you could control four hands as seamlessly as you’re controlling your current two hands, he continued, “you would actually be doing double the amount of work that you would normally do. It’s as simple as that. You’re increasing your productivity to do whatever you want to do.” I started to picture his vision—working with four arms, four hands—and asked, “Where does it end?”
“It won’t ever end,” Ling said. “I mean, it will constantly get better and better—” His cellphone rang. He took the call, then resumed where he had left off: “What darpa does is we provide a fundamental tool so that other people can take those tools and do great things with them that we’re not even thinking about.”
Judging by what he said next, however, the number of things that darpa is thinking about far exceeds what it typically talks about in public. “If a brain can control a robot that looks like a hand,” Ling said, “why can’t it control a robot that looks like a snake? Why can’t that brain control a robot that looks like a big mass of Jell-O, able to get around corners and up and down and through things? I mean, somebody will find an application for that. They couldn’t do it now, because they can’t become that glob, right? But in my world, with their brain now having a direct interface with that glob, that glob is the embodiment of them. So now they’re basically the glob, and they can go do everything a glob can do.”
VI. Gold Rush
darpa’s developing capabilities still hover at or near a proof-of-concept stage. But that’s close enough to have drawn investment from some of the world’s richest corporations. In 1990, during the administration of President George H. W. Bush, darpa Director Craig I. Fields lost his job because, according to contemporary news accounts, he intentionally fostered business development with some Silicon Valley companies, and White House officials deemed that inappropriate. Since the administration of the second President Bush, however, such sensitivities have faded.
Over time, darpa has become something of a farm team for Silicon Valley. Regina Dugan, who was appointed darpa director by President Barack Obama, went on to head Google’s Advanced Technology and Projects group, and other former darpa officials went to work for her there. She then led R&D for the analogous group at Facebook, called Building 8. (She has since left Facebook.)
darpa’s neurotechnology research has been affected in recent years by corporate poaching. Doug Weber told me that some darpa researchers have been “scooped up” by companies including Verily, the life-sciences division of Alphabet (the parent company of Google), which, in partnership with the British pharmaceutical conglomerate GlaxoSmithKline, created a company called Galvani Bioelectronics, to bring neuro-modulation devices to market. Galvani calls its business “bioelectric medicine,” which conveys an aura of warmth and trustworthiness. Ted Berger, a University of Southern California biomedical engineer who collaborated with the Wake Forest researchers on their studies of memory transfer in rats, worked as the chief science officer at the neurotechnology company Kernel, which plans to build “advanced neural interfaces to treat disease and dysfunction, illuminate the mechanisms of intelligence, and extend cognition.” Elon Musk has courted darpa researchers to join his company Neuralink, which is said to be developing an interface known as “neural lace.” Facebook’s Building 8 is working on a neural interface too. In 2017, Regina Dugan said that 60 engineers were at work on a system with the goal of allowing users to type 100 words a minute “directly from your brain.” Geoff Ling is on Building 8’s advisory board.
Talking with Justin Sanchez, I speculated that if he realizes his ambitions, he could change daily life in even more fundamental and lasting ways than Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey have. Sanchez blushes easily, and he breaks eye contact when he is uncomfortable, but he did not look away when he heard his name mentioned in such company. Remembering a remark that he had once made about his hope for neurotechnology’s wide adoption, but with “appropriate checks to make sure that it’s done in the right way,” I asked him to talk about what the right way might look like. Did any member of Congress strike him as having good ideas about legal or regulatory structures that might shape an emerging neural-interface industry? He demurred (“darpa’s mission isn’t to define or even direct those things”) and suggested that, in reality, market forces would do more to shape the evolution of neurotechnology than laws or regulations or deliberate policy choices. What will happen, he said, is that scientists at universities will sell their discoveries or create start-ups. The marketplace will take it from there: “As they develop their companies, and as they develop their products, they’re going to be subject to convincing people that whatever they’re developing makes sense, that it helps people to be a better version of themselves. And that process—that day-to-day development—will ultimately guide where these technologies go. I mean, I think that’s the frank reality of how it ultimately will unfold.”
He seemed entirely untroubled by what may be the most troubling aspect of darpa’s work: not that it discovers what it discovers, but that the world has, so far, always been ready to buy it.
This article appears in the November 2018 print edition with the headline “The Pentagon Wants to Weaponize the Brain. What Could Go Wrong?”
The race to read your brain: Sci-fi has been predicting it for years, and now it’s fast becoming reality… TOM LEONARD examines the chilling truth behind tech tycoons’ attempts at getting inside your head
By Tom Leonard, in New York for The Daily Mail
Posted on https://www.dailymail.co.uk/
As you lie sedated on the hospital bed, a robot’s arm reaches out to the side of your head.
Suddenly, a sharp beam of light springs out of its finger as it starts to drill into your skull with an intense laser.
The operation is relatively quick and painless, and when you walk out of the room later that day, you don’t feel different in the slightest.
If it weren’t for the fact that you can now not only play video games and move a computer mouse but even communicate with others simply by using your thoughts, you might not realise that a tiny microchip has been implanted in your skull.
This sounds like a dystopian plot from a futuristic sci-fi film. But it may not be fiction for much longer.
Astonishing developments in neuroscience are set to transform our understanding of the human mind, giving us the ability not only to read our thought processes but even to ‘supercharge’ them by linking them to computers.
Is this proof that man and machine really can reach a perfect symbiosis? Or is it a terrifying technological nightmare finally come to fruition?

On paper, the latest breakthrough by scientists at the Kernel neurotechnology company in California certainly doesn’t sound that alarming.
Last week, it announced it had devised a helmet which can see and record brain activity.
Of the two brain-monitoring devices Kernel has created, the first — called Flux — monitors electromagnetic activity, while the second — Flow — measures blood movement by pulsing the brain with light.
Combined, the two helmets will allow scientists to analyse neurons, the brain cells that transmit nerve impulses, as they fire. And in doing so, Kernel has effectively pulled off a monumental technology shrinking exercise.
Until now, machines capable of doing this have been neither portable nor cheap: they take up an entire room, cost around $1 million and require highly trained technicians to operate them.
The subject has to be strapped down, often in extremely cold temperatures and inside the machine — hardly ideal conditions.
The Kernel headwear, by contrast, is the size of a bicycle helmet, with a web of sensors that sits around the skull; it will cost around $5,000 and could be worn as the subject goes about daily life, providing far more insight into thought processes.
The project was, for want of a better word, the brainchild of Bryan Johnson, a Silicon Valley entrepreneur who made $800 million from the digital-payments system Braintree.
Using his windfall, he hired a team of experts, which included neuroscientists and specialists on microchips and lasers, to spend the past four years working on the helmet.
The details of the project have been kept secret — Kernel hasn’t even released a photo of its creation — but Mr Johnson, 42, claims it ‘triggers a new era of access to the mind and the ability to ask all sorts of new questions about ourselves’.
Whatever the full truth about his creation, it’s clear that in recent years mind-reading technology has become something of a holy grail in Silicon Valley.
Indeed, just this month, Elon Musk, the eccentric billionaire entrepreneur behind Tesla electric cars and SpaceX rockets, announced that his own brain technology company, Neuralink, could be ready to put its first mind-reading implant into a human recipient ‘within a year’ — with the chip inserted directly into the skull.
The company, in which Musk has invested more than $100million, has discovered how to insert tiny wires into the brains of rodents and primates, and has been working on how to do it in people.
With both monkeys and rats, the animals were able to move a computer mouse across a screen just by thinking about it.
A so-called ‘neural lace’, a mesh of 3,000 electrodes attached to wires much thinner than a human hair, will be robotically inserted into the human brain via small holes bored into the skull by a laser.
(The Kernel team also initially considered a brain implant, as it provides a much clearer connection to the neurons, but decided, perhaps wisely, that people would far prefer something like a hat they could put on and take off.)
These thread-like sensors pick up brain activity and, via a tiny processor with a wireless transmitter implanted behind the ear, exchange information with an external computer or smartphone.
Musk has dubbed Neuralink a ‘wizard hat for the brain’, an apt description given that his claims for what it could do would put it deep into the realm of what was previously considered the supernatural.
They include the ability to control computers and phones with one’s mind, to upload and download thoughts and to communicate with other people by telepathy.
Many who find such a prospect terrifying won’t be encouraged by Musk’s justification for such a device: he’s convinced that if we don’t pair our minds with machines, we risk being destroyed in the future by super-intelligent robots.
‘Ultimately, we will achieve symbiosis with artificial intelligence,’ he has predicted.
If the idea of becoming a cyborg — part-man, part-machine — doesn’t fill you with joy, you can look forward to some more down-to-earth medical benefits from all this expensive research.
Both Johnson and Musk insist they want their inventions to advance medical science.
The former hopes people suffering from strokes or paralysis could use a Kernel helmet to communicate simply by thinking.
Musk claims the Neuralink implant could repair motor function in the paralysed, restore eyesight and hearing, and help those suffering memory loss through dementia or Alzheimer’s.
And just as digital devices such as Apple Watches and Fitbits now monitor wearers’ heart rates and the number of steps they take, there could soon be brain apps that can keep track of our creativity, happiness and honesty.
Kernel has already conducted research that shows its helmet can identify any song its wearer is listening to just by observing their brain and how it reacts.
Meanwhile, the University of Washington has built a ‘brain-to-brain network’ that allows three people to play a simple Tetris-style computer game with each other using just their thoughts.
However, now that tech billionaires are pouring hundreds of millions of dollars into reading our thoughts, we don’t need to read theirs to know that Silicon Valley rarely does anything purely out of the goodness of its own heart.
For while its breakthroughs are often initially hailed as socially beneficial, its predatory intentions — such as shamelessly snooping on our lives and ruthlessly manipulating users to get them hooked — tend to emerge later.
Take Facebook, which last year reportedly spent $1 billion buying a company, CTRL-labs, which is researching mind-reading technology and has developed a wristband that is said to decode electrical signals from the wearer’s brain, allowing them to control a computer using their thoughts.
Given the social media giant’s dreadful record for surreptitiously harvesting the personal information of users, the news that it may soon have direct access to our brains has understandably filled many people with dread.
As well as logging everything we do online, do we really want Facebook’s unscrupulous multi-billionaire boss Mark Zuckerberg knowing what’s going on in our minds as well?
As for the likes of Musk and Johnson, their sceptics maintain that while it may be possible to soon recognise individual thoughts — such as a paraplegic’s desire to lift their arm — actually reading a train of thought is way beyond us for the moment.
Many of us will be only too happy to hear it.
Six Paths to the Nonsurgical Future of Brain-Machine Interfaces
Posted on DARPA official website https://www.darpa.mil
Teams selected for DARPA’s Next-Generation Nonsurgical Neurotechnology program will pursue a mix of approaches to developing wearable interfaces for communicating with the brain
DARPA has awarded funding to six organizations to support the Next-Generation Nonsurgical Neurotechnology (N3) program, first announced in March 2018. Battelle Memorial Institute, Carnegie Mellon University, Johns Hopkins University Applied Physics Laboratory, Palo Alto Research Center (PARC), Rice University, and Teledyne Scientific are leading multidisciplinary teams to develop high-resolution, bidirectional brain-machine interfaces for use by able-bodied service members. These wearable interfaces could ultimately enable diverse national security applications such as control of active cyber defense systems and swarms of unmanned aerial vehicles, or teaming with computer systems to multitask during complex missions.
“DARPA is preparing for a future in which a combination of unmanned systems, artificial intelligence, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager. “By creating a more accessible brain-machine interface that doesn’t require surgery to use, DARPA could deliver tools that allow mission commanders to remain meaningfully involved in dynamic operations that unfold at rapid speed.”
Over the past 18 years, DARPA has demonstrated increasingly sophisticated neurotechnologies that rely on surgically implanted electrodes to interface with the central or peripheral nervous systems. The agency has demonstrated achievements such as neural control of prosthetic limbs and restoration of the sense of touch to the users of those limbs, relief of otherwise intractable neuropsychiatric illnesses such as depression, and improvement of memory formation and recall. Due to the inherent risks of surgery, these technologies have so far been limited to use by volunteers with clinical need.
For the military’s primarily able-bodied population to benefit from neurotechnology, nonsurgical interfaces are required. In fact, similar technology could greatly benefit clinical populations as well. By removing the need for surgery, N3 systems seek to expand the pool of patients who can access treatments such as deep brain stimulation to manage neurological illnesses.
The N3 teams are pursuing a range of approaches that use optics, acoustics, and electromagnetics to record neural activity and/or send signals back to the brain at high speed and resolution. The research is split between two tracks. Teams are pursuing either completely noninvasive interfaces that are entirely external to the body or minutely invasive interface systems that include nanotransducers that can be temporarily and nonsurgically delivered to the brain to improve signal resolution.
- The Battelle team, under principal investigator Dr. Gaurav Sharma, aims to develop a minutely invasive interface system that pairs an external transceiver with electromagnetic nanotransducers that are nonsurgically delivered to neurons of interest. The nanotransducers would convert electrical signals from the neurons into magnetic signals that can be recorded and processed by the external transceiver, and vice versa, to enable bidirectional communication.
- The Carnegie Mellon University team, under principal investigator Dr. Pulkit Grover, aims to develop a completely noninvasive device that uses an acousto-optical approach to record from the brain and interfering electrical fields to write to specific neurons. The team will use ultrasound waves to guide light into and out of the brain to detect neural activity. The team’s write approach exploits the non-linear response of neurons to electric fields to enable localized stimulation of specific cell types.
- The Johns Hopkins University Applied Physics Laboratory team, under principal investigator Dr. David Blodgett, aims to develop a completely noninvasive, coherent optical system for recording from the brain. The system will directly measure optical path-length changes in neural tissue that correlate with neural activity.
- The PARC team, under principal investigator Dr. Krishnan Thyagarajan, aims to develop a completely noninvasive acousto-magnetic device for writing to the brain. Their approach pairs ultrasound waves with magnetic fields to generate localized electric currents for neuromodulation. The hybrid approach offers the potential for localized neuromodulation deeper in the brain.
- The Rice University team, under principal investigator Dr. Jacob Robinson, aims to develop a minutely invasive, bidirectional system for recording from and writing to the brain. For the recording function, the interface will use diffuse optical tomography to infer neural activity by measuring light scattering in neural tissue. To enable the write function, the team will use a magneto-genetic approach to make neurons sensitive to magnetic fields.
- The Teledyne team, under principal investigator Dr. Patrick Connolly, aims to develop a completely noninvasive, integrated device that uses micro optically pumped magnetometers to detect small, localized magnetic fields that correlate with neural activity. The team will use focused ultrasound for writing to neurons.
Throughout the program, the research will benefit from independent legal and ethical experts who have agreed to advise on N3 progress and to consider potential future military and civilian applications and implications of the technology. Additionally, federal regulators are cooperating with DARPA to help the teams better understand human-use clearance as research gets underway. As the work progresses, these regulators will help guide strategies for submitting applications for Investigational Device Exemptions and Investigational New Drugs to enable human trials of N3 systems during the last phase of the four-year program.
“If N3 is successful, we’ll end up with wearable neural interface systems that can communicate with the brain from a range of just a few millimeters, moving neurotechnology beyond the clinic and into practical use for national security,” Emondi said. “Just as service members put on protective and tactical gear in preparation for a mission, in the future they might put on a headset containing a neural interface, use the technology however it’s needed, then put the tool aside when the mission is complete.”
Additional details of the program schedule and metrics are available in the 2018 broad agency announcement: https://go.usa.gov/xmK4s.
Rice researchers given $18M grant to develop mind-reading helmet
They’re developing a device that reads minds.
“What we’re trying to do is we’re trying to develop a faster way for you to communicate with your devices and other people,” said Jacob Robinson, associate professor of electrical computer engineering at Rice University.
Robinson’s team is leading the effort, which is funded by the Department of Defense.
“The speed at which you can imagine sending a message is a lot faster than you can physically move your fingers over your phone. Imagine if you could send that message almost instantaneously,” he said.
Their idea is to build a helmet that transmits thoughts through light pulses and magnetic fields without surgery.
The device will only access the part of the brain that processes intentional thought. Even though it will not access the stream of consciousness, researchers are working with neuro-ethics experts to make sure the technology isn’t misused.
The government is giving the team $18 million to build a device that works in four years.
“I’m not confident this is possible. We’re hopeful. We’re optimistic. We’re trying something that’s really hard,” Robinson said.
One reason researchers are optimistic is the progress they’ve already made with fruit flies. In the last few weeks, scientists have proven they can affect specific brain cells remotely.
“We know there are specific brain cells in the fruit fly that control its mating ritual. One of the things that’s characteristic of this mating ritual is the animal will extend its wings. You’ll see when we turn the magnetic field on here, the fly starts to go through its mating ritual,” Robinson said.
Next, they’ll try to do the same with rodents, primates, then people.
It isn’t hard to imagine why the military wants to send information to soldiers, computers or even drones at the speed of thought.
However, the researchers believe the technology can help people with neurological disorders, too.
“We think the first way these types of technologies could be useful is for people who have lost the ability to see. Then we can stimulate a pattern of activity in the brain to recreate an image of a house, car or the scene in front of them,” Robinson said.
It’s an ambitious idea, but only an idea at this point. The government hopes employing the brightest minds, and contributing a lot of money, will make it a reality.
Brown to receive up to $19M to engineer next-generation brain-computer interface
By Kevin Stacey
Posted on https://www.brown.edu/news
on July 10, 2017
PROVIDENCE, R.I. [Brown University] — With a grant of up to $19 million from the Defense Advanced Research Projects Agency (DARPA), Brown University will lead a collaboration to develop a fully implantable wireless brain interface system able to record and stimulate neural activity with unprecedented detail and precision.
The international team of engineers, neuroscientists and physicians involved in the project envisions an approach to neural interfaces that is unlike any available today. They aim to create a “cortical intranet” of tens of thousands of wireless micro-devices — each about the size of a grain of table salt — that can be safely implanted onto or into the cerebral cortex, the outer layer of the brain. The implants, dubbed “neurograins,” will operate independently, interfacing with the brain at the level of a single neuron. The activity of the devices will be coordinated wirelessly by a central communications hub in the form of a thin electronic patch worn on the skin or implanted beneath it.
The system will be designed to have both “read-out” and “write-in” capabilities. It will be able to record neural activity, helping to deepen scientists’ understanding of how the brain processes stimuli from the outside world. It will also have the capability to stimulate neural activity through tiny electrical pulses, a function researchers hope to eventually use in human clinical research aimed at restoring brain function lost to injury or disease.
“What we’re developing is essentially a micro-scale wireless network in the brain enabling us to communicate directly with neurons on a scale that hasn’t previously been possible,” said Arto Nurmikko, L. Herbert Ballou University Professor of Engineering at Brown and the project’s principal investigator. “The understanding of the brain we can get from such a system will hopefully lead to new therapeutic strategies involving neural stimulation of the brain, which we can implement with this new neurotechnology.”
The research team will include researchers from Brown, IMEC (a Belgian microtechnology institute), Massachusetts General Hospital, Stanford University, the University of California, Berkeley, the University of California, San Diego, the mobile telecommunications firm Qualcomm, and the Wyss Center for Bio and Neuroengineering in Geneva. The funding, to be distributed over four years, comes from DARPA’s new Neural Engineering System Design (NESD) program, which is aimed at developing new devices “able to provide advanced signal resolution and data-transfer bandwidth between the brain and electronics.”
At Brown, the work will build on decades of research in neuroengineering and brain-computer interfaces, computational neuroscience and clinical therapeutics through the Brown Institute for Brain Science, the University’s Warren Alpert Medical School and its School of Engineering.
“Brown has a tradition of innovative multidisciplinary research in brain science, especially with projects that have the potential to transform lives through technology-assisted repair of neurological injuries,” said Jill Pipher, vice president for research at Brown. “This new grant enables a group of outstanding Brown researchers to develop leading-edge technology and solve new computational problems in a quest to understand human brain functionality at a totally new scale.”
Four Brown faculty members will serve as co-investigators on the project. Leigh Hochberg is a professor of engineering and one of the leaders of the BrainGate consortium, which develops and tests brain-computer interfaces through ongoing human clinical trials. David Borton is an assistant professor of engineering who previously worked with Nurmikko to develop the first fully implantable brain sensor that could transmit information wirelessly. Larry Larson, Sorensen Family Dean of Engineering, is a leader in semiconductor microwave technology and wireless communication. Wilson Truccolo, an assistant professor of neuroscience, has developed unique theoretical and computational approaches to decoding and encoding neural signals from cortical microcircuits. Each will lend their expertise to the project alongside the team’s experts from leading institutions in the U.S. and abroad, with additional collaboration with companies such as software developer Intel Nervana.
“This is an ambitious project that will require a convergence of expertise from across disciplines,” Larson said. “We work very hard to make the School of Engineering the kind of place where these kinds of projects thrive, and we’re very much looking forward to the work ahead of us.”
New challenges, new technologies
The project will involve many daunting technical challenges, Nurmikko said, which include completing development of the tiny neurograin sensors and coordinating their activity.
“We need to make the neurograins small enough to be minimally invasive but with extraordinary technical sophistication, which will require state-of-the-art microscale semiconductor technology,” Nurmikko said. “Additionally, we have the challenge of developing the wireless external hub that can process the signals generated by large populations of spatially distributed neurograins at the same time. This is probably the hardest endeavor in my career.”
Then there’s the challenge of dealing with all of the data the system produces. Current state-of-the-art brain-computer interfaces sample the activity of 100 or so neurons. For this project, the team wants to start at 1,000 neurons and build from there up to 100,000.
“When you increase the number of neurons tenfold, you increase the amount of data you need to manage by much more than that because the brain operates through nested and interconnected circuits,” Nurmikko said. “So this becomes an enormous big data problem for which we’ll need to develop new computational neuroscience tools.”
The team will first apply new technologies to the sensory and auditory function in mammals. The level of detail expected from the neurograin system, the researchers say, should yield an entirely new level of understanding of sensory processes in the brain.
“We aim to be able to read out from the brain how it processes, for example, the difference between touching a smooth, soft surface and a rough, hard one and then apply microscale electrical stimulation directly to the brain to create proxies of such sensation,” Nurmikko said. “Similarly, we aim to advance our understanding of how the brain processes and makes sense of the many complex sounds we listen to every day, which guide our vocal communication in a conversation and stimulate the brain to directly experience such sounds.”
The Brown-led team is one of six research teams to be awarded grants from DARPA under the NESD program, which was launched last year. Other awarded projects will be led by researchers at Columbia University, the Foundation for Vision and Understanding, John B. Pierce Laboratory, Paradromics and the University of California, Berkeley.
Mark Zuckerberg wants a wearable device that can read your thoughts to ‘control something in virtual or augmented reality’
By James Pero
Posted on dailymail.com
on 10 October 2019
- Mark Zuckerberg discussed Facebook’s goals for brain-reading tech
- A wearable that can read brain signals could be used in augmented reality
- The company bought brain-to-computer interface company CTRL-labs
- Some users may need to have brain-reading tech implanted into their brains
Facebook CEO Mark Zuckerberg got candid about the company’s intention of creating wearables that can read people’s brains.
At an ongoing discussion series run by Facebook, Zuckerberg talked about the idea of technology that can translate brain signals into useful information for machines in two distinct arenas.
‘The goal is to eventually make it so that you can think something and control something in virtual or augmented reality,’ said Zuckerberg in the discussion, which also included Dr. Joe DeRisi and Dr. Steve Quake of the Chan Zuckerberg Biohub, a medical science research center funded by Zuckerberg and his wife Priscilla Chan.
Facebook has already been steadily making progress in these arenas, including strides in virtual reality technology through its VR hardware company Oculus, its rumored AR glasses and, more recently, its investments in brain-to-computer interfaces.
In an undisclosed deal worth between $500 million and $1 billion, Facebook purchased a company called CTRL-labs, which has been pioneering technology in the world of brain-to-computer interfaces.
Specifically, CTRL-labs is known to be developing a watch-style device that intercepts the signals sent from the brain to the fingers, in order to control a phone.
It works by assigning particular nerve messages from the brain to certain commands in the computer, which could in theory cut out the need to actually press any buttons.
Zuckerberg says that while signals could be read superficially, without having to surgically implant a device, there are some cases in which a harder approach may be necessary.
‘I have enough neural capacity in my motor neurons to probably control another extra hand, it’s just a matter of training that and then they can pick up those signals off of the wrist,’ Zuckerberg said during the discussion.
‘But if your ability to translate things that are going on in your brain into motor activity is limited then you need something implanted.’
Whether Facebook is prepared to go to those invasive lengths to mesh humans with computers, however, is another question.
In a leaked transcript recently published by The Verge, Zuckerberg said that the company intends to focus mostly on devices that can read brain signals without actually having to be embedded into one’s skull.
‘We’re more focused on — I think completely focused on non-invasive. [Laughter] We’re trying to make AR and VR a big thing in the next five years to 10 years …,’ reads the transcript.
‘I don’t know, you think Libra is hard to launch. “Facebook wants to perform brain surgery,” I don’t want to see the congressional hearings on that one.’
Conversely, another player in the world of brain-to-computer interfaces, Neuralink, which is owned by Elon Musk, has been focusing more on invasive ways of meshing human brains and computers.
Putting vision models to the test. Study shows that artificial neural networks can be used to drive brain activity
Anne Trafton | MIT News Office
Posted on http://news.mit.edu
on May 2, 2019
MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain’s visual cortex.
Using their current best model of the brain’s visual neural network, the researchers designed a new way to precisely control individual neurons and populations of neurons in the middle of that network. In an animal study, the team then showed that the information gained from the computational model enabled them to create images that strongly activated specific brain neurons of their choosing.
The findings suggest that the current versions of these models are similar enough to the brain that they could be used to control brain states in animals. The study also helps to establish the usefulness of these vision models, which have generated vigorous debate over whether they accurately mimic how the visual cortex works, says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.
“People have questioned whether these models provide understanding of the visual system,” he says. “Rather than debate that in an academic sense, we showed that these models are already powerful enough to enable an important new application. Whether you understand how the model works or not, it’s already useful in that sense.”
MIT postdocs Pouya Bashivan and Kohitij Kar are the lead authors of the paper, which appears in the May 2 online edition of Science.
Over the past several years, DiCarlo and others have developed models of the visual system based on artificial neural networks. Each network starts out with an arbitrary architecture consisting of model neurons, or nodes, that can be connected to each other with different strengths, also called weights.
The researchers then train the models on a library of more than 1 million images. As the researchers show the model each image, along with a label for the most prominent object in the image, such as an airplane or a chair, the model learns to recognize objects by changing the strengths of its connections.
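The training step described here, a network adjusting its connection strengths as it is shown labelled examples, can be illustrated in miniature. The sketch below is not the MIT team's model, just a generic single-layer network learning two toy "image" classes by nudging its weights:

```python
# Toy illustration only: a single-layer network "learns to recognize" two
# classes of 4-pixel patterns by adjusting its connection weights, the same
# principle the article describes at vastly larger scale.
import numpy as np

X = np.array([[1.0, 0.9, 0.1, 0.0],   # class 0: energy in the first two pixels
              [0.9, 1.0, 0.0, 0.1],
              [0.1, 0.0, 1.0, 0.9],   # class 1: energy in the last two pixels
              [0.0, 0.1, 0.9, 1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])    # label for each example

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=4)     # connection strengths ("weights")
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: show each labelled example and nudge the weights to reduce error.
for _ in range(500):
    p = sigmoid(X @ w + b)            # belief that each input is class 1
    w -= X.T @ (p - y) / len(y)       # gradient step on the prediction error
    b -= np.mean(p - y)

pred = (sigmoid(X @ w + b) > 0.5).astype(int)
print(pred.tolist())  # the trained network labels all four patterns correctly
```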
It’s difficult to determine exactly how the model achieves this kind of recognition, but DiCarlo and his colleagues have previously shown that the “neurons” within these models produce activity patterns very similar to those seen in the animal visual cortex in response to the same images.
In the new study, the researchers wanted to test whether their models could perform some tasks that previously have not been demonstrated. In particular, they wanted to see if the models could be used to control neural activity in the visual cortex of animals.
“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” Bashivan says. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”
To achieve this, the researchers first created a one-to-one map of neurons in the brain’s visual area V4 to nodes in the computational model. They did this by showing images to animals and to the models, and comparing their responses to the same images. There are millions of neurons in area V4, but for this study, the researchers created maps for subpopulations of five to 40 neurons at a time.
“Once each neuron has an assignment, the model allows you to make predictions about that neuron,” DiCarlo says.
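One plausible way to build such a one-to-one map, offered here as an illustrative assumption rather than the paper's exact procedure, is to correlate each recorded neuron's responses across a shared image set with every model node and assign the best match:

```python
# Hypothetical sketch of the neuron-to-node mapping idea: pair each recorded
# neuron with the model node whose responses across the same images correlate
# with it most strongly. Simulated data stands in for real recordings.
import numpy as np

rng = np.random.default_rng(1)
n_images, n_neurons, n_nodes = 50, 5, 20

node_resp = rng.normal(size=(n_images, n_nodes))       # model node responses
true_match = rng.choice(n_nodes, size=n_neurons, replace=False)
# Simulated neural responses: each neuron tracks one node, plus noise.
neuron_resp = node_resp[:, true_match] + 0.1 * rng.normal(size=(n_images, n_neurons))

# Correlation of every neuron with every node across the image set.
corr = np.corrcoef(neuron_resp.T, node_resp.T)[:n_neurons, n_neurons:]
assignment = corr.argmax(axis=1)     # best-matching node for each neuron

print(assignment.tolist())  # recovers the simulated ground-truth pairing
```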
The researchers then set out to see if they could use those predictions to control the activity of individual neurons in the visual cortex. The first type of control, which they called “stretching,” involves showing an image that will drive the activity of a specific neuron far beyond the activity usually elicited by “natural” images similar to those used to train the neural networks.
The researchers found that when they showed animals these “synthetic” images, which are created by the models and do not resemble natural objects, the target neurons did respond as expected. On average, the neurons showed about 40 percent more activity in response to these images than when they were shown natural images like those used to train the model. This kind of control has never been reported before.
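The “stretching” procedure is, in essence, activation maximization: follow the model's gradient uphill in image space until the chosen unit responds more strongly than it does to any natural input. A toy sketch, with a single linear-nonlinear unit standing in for a node of the deep V4 model (an illustrative assumption, not the study's code):

```python
# Toy activation maximization: gradient-ascend on the "image" x so that one
# model unit is driven beyond anything natural-looking inputs elicit.
# The unit here is a fixed random linear filter followed by tanh.
import numpy as np

rng = np.random.default_rng(2)
d = 64                                   # an 8x8 toy image, flattened
w = rng.normal(size=d)                   # the target unit's filter

def response(x):
    return np.tanh(w @ x)                # unit activity for image x

x = rng.normal(scale=0.01, size=d)       # start from a near-blank image
for _ in range(200):
    grad = (1 - np.tanh(w @ x) ** 2) * w   # d(response)/dx
    x += 0.1 * grad
    n = np.linalg.norm(x)
    if n > 1.0:                          # keep the image in a fixed norm budget
        x /= n

# Compare against the best response among 100 random "natural" images.
natural = max(response(rng.normal(scale=1 / np.sqrt(d), size=d))
              for _ in range(100))
print(response(x) > natural)  # the synthetic image out-drives natural ones
```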
“That they succeeded in doing this is really amazing. It’s as if, for that neuron at least, its ideal image suddenly leaped into focus. The neuron was suddenly presented with the stimulus it had always been searching for,” says Aaron Batista, an associate professor of bioengineering at the University of Pittsburgh, who was not involved in the study. “This is a remarkable idea, and to pull it off is quite a feat. It is perhaps the strongest validation so far of the use of artificial neural networks to understand real neural networks.”
In a similar set of experiments, the researchers attempted to generate images that would drive one neuron maximally while also keeping the activity in nearby neurons very low, a more difficult task. For most of the neurons they tested, the researchers were able to enhance the activity of the target neuron with little increase in the surrounding neurons.
“A common trend in neuroscience is that experimental data collection and computational modeling are executed somewhat independently, resulting in very little model validation, and thus no measurable progress. Our efforts bring back to life this ‘closed loop’ approach, engaging model predictions and neural measurements that are critical to the success of building and testing models that will most resemble the brain,” Kar says.
The researchers also showed that they could use the model to predict how neurons of area V4 would respond to synthetic images. Most previous tests of these models have used the same type of naturalistic images that were used to train the model. The MIT team found that the models were about 54 percent accurate at predicting how the brain would respond to the synthetic images, compared to nearly 90 percent accuracy when the natural images are used.
“In a sense, we’re quantifying how accurate these models are at making predictions outside the domain where they were trained,” Bashivan says. “Ideally the model should be able to predict accurately no matter what the input is.”
The researchers now hope to improve the models’ accuracy by allowing them to incorporate the new information they learn from seeing the synthetic images, which was not done in this study.
This kind of control could be useful for neuroscientists who want to study how different neurons interact with each other, and how they might be connected, the researchers say. Farther in the future, this approach could potentially be useful for treating mood disorders such as depression. The researchers are now working on extending their model to the inferotemporal cortex, which feeds into the amygdala, which is involved in processing emotions.
“If we had a good model of the neurons that are engaged in experiencing emotions or causing various kinds of disorders, then we could use that model to drive the neurons in a way that would help to ameliorate those disorders,” Bashivan says.
The research was funded by the Intelligence Advanced Research Projects Agency, the MIT-IBM Watson AI Lab, the National Eye Institute, and the Office of Naval Research.
Researchers Translate Brain Signals Directly Into Speech
Posted on https://neurosciencenews.com
Summary: Researchers have developed a new system which utilizes artificial intelligence technology to turn brain signals to recognizable speech. The breakthrough could help restore a voice to those with limited, or no ability, to speak.
Source: Zuckerman Institute.
In a scientific first, Columbia neuroengineers have created a system that translates thought into intelligible, recognizable speech. By monitoring someone’s brain activity, the technology can reconstruct the words a person hears with unprecedented clarity. This breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain. It also lays the groundwork for helping people who cannot speak, such as those living with amyotrophic lateral sclerosis (ALS) or recovering from stroke, regain their ability to communicate with the outside world.
These findings were published today in Scientific Reports.
“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” said Nima Mesgarani, PhD, the paper’s senior author and a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute. “With today’s study, we have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”
Decades of research have shown that when people speak — or even imagine speaking — telltale patterns of activity appear in their brains. Distinct (but recognizable) patterns of signals also emerge when we listen to someone speak, or imagine listening. Experts, trying to record and decode these patterns, see a future in which thoughts need not remain hidden inside the brain — but instead could be translated into verbal speech at will.
But accomplishing this feat has proven challenging. Early efforts to decode brain signals by Dr. Mesgarani and others focused on simple computer models that analyzed spectrograms, which are visual representations of sound frequencies.
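As a rough illustration of what those early models consumed, a spectrogram can be computed by slicing audio into short overlapping windows and taking the power at each frequency in each window. The window length, hop size and 440 Hz test tone below are arbitrary choices for the sketch:

```python
# Illustrative sketch: a spectrogram is the energy of short, overlapping
# windows of a signal at each frequency, the visual representation the early
# brain-signal decoders tried to predict.
import numpy as np

fs = 8000                              # sample rate (Hz)
t = np.arange(fs) / fs                 # one second of audio
signal = np.sin(2 * np.pi * 440 * t)   # a pure 440 Hz tone

win, hop = 256, 128
frames = [signal[i:i + win] * np.hanning(win)      # windowed slices
          for i in range(0, len(signal) - win, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # power per bin

freqs = np.fft.rfftfreq(win, d=1 / fs)
peak = freqs[spectrogram.mean(axis=0).argmax()]
print(peak)  # the strongest frequency bin sits near 440 Hz
```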
But because this approach has failed to produce anything resembling intelligible speech, Dr. Mesgarani’s team turned instead to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking.
“This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions,” said Dr. Mesgarani, who is also an associate professor of electrical engineering at Columbia’s Fu Foundation School of Engineering and Applied Science.
To teach the vocoder to interpret brain activity, Dr. Mesgarani teamed up with Ashesh Dinesh Mehta, MD, PhD, a neurosurgeon at Northwell Health Physician Partners Neuroscience Institute and co-author of today’s paper. Dr. Mehta treats epilepsy patients, some of whom must undergo regular surgeries.
“Working with Dr. Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by different people, while we measured patterns of brain activity,” said Dr. Mesgarani. “These neural patterns trained the vocoder.”
Next, the researchers asked those same patients to listen to speakers reciting digits from 0 to 9, while recording brain signals that could then be run through the vocoder. The sound produced by the vocoder in response to those signals was analyzed and cleaned up by neural networks, a type of artificial intelligence that mimics the structure of neurons in the biological brain.
The end result was a robotic-sounding voice reciting a sequence of numbers. To test the accuracy of the recording, Dr. Mesgarani and his team tasked individuals to listen to the recording and report what they heard.
“We found that people could understand and repeat the sounds about 75% of the time, which is well above and beyond any previous attempts,” said Dr. Mesgarani. The improvement in intelligibility was especially evident when comparing the new recordings to the earlier, spectrogram-based attempts. “The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy.”
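The pipeline described above, neural recordings in, reconstructed audio out, can be illustrated with a toy sketch. Everything here is an assumption for illustration: the "neural" data are simulated as a noisy linear mixture of spectrogram frames, and a simple least-squares decoder stands in for the study's trained vocoder-plus-neural-network system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the decoding step: "neural activity" is simulated as a
# noisy linear mixture of spectrogram frames; a least-squares decoder is
# then fit to map activity back to the spectrogram, which a vocoder would
# turn into audio. All dimensions and the linear model are assumptions.
n_frames, n_freq, n_electrodes = 500, 32, 64
spectrogram = rng.random((n_frames, n_freq))           # target audio representation
mixing = rng.normal(size=(n_freq, n_electrodes))       # unknown brain "encoding"
neural = spectrogram @ mixing + 0.1 * rng.normal(size=(n_frames, n_electrodes))

# Fit the decoder on the first 400 frames ("training" sentences)...
W, *_ = np.linalg.lstsq(neural[:400], spectrogram[:400], rcond=None)
# ...and reconstruct the held-out frames (the spoken digits).
recon = neural[400:] @ W
r = np.corrcoef(recon.ravel(), spectrogram[400:].ravel())[0, 1]
print(f"held-out reconstruction correlation: {r:.2f}")
```

With clean simulated data the held-out correlation is high; real cortical recordings are far noisier, which is why the actual study needed deep networks and a trained vocoder rather than a linear map.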
Dr. Mesgarani and his team plan to test more complicated words and sentences next, and they want to run the same tests on brain signals emitted when a person speaks or imagines speaking. Ultimately, they hope their system could be part of an implant, similar to those worn by some epilepsy patients, that translates the wearer’s thoughts directly into words.
“In this scenario, if the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,” said Dr. Mesgarani. “This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”
Funding: This research was supported by the National Institutes of Health (DC014279), the Pew Charitable Trusts and the Pew Biomedical Scholars Program.
Source: Anne Holden – Zuckerman Institute
Original Research: Open access research for “Towards reconstructing intelligible speech from the human auditory cortex” by Hassan Akbari, Bahar Khalighinejad, Jose L. Herrero, Ashesh D. Mehta & Nima Mesgarani in Scientific Reports. Published January 29, 2019.
New Technology Uses Lasers to Transmit Audible Messages to Specific People
Posted on https://www.osa.org on the 23 January 2019
Photoacoustic communication approach could send warning messages through the air without requiring a receiving device.
WASHINGTON — Researchers have demonstrated that a laser can transmit an audible message to a person without any type of receiver equipment. The ability to send highly targeted audio signals over the air could be used to communicate across noisy rooms or warn individuals of a dangerous situation such as an active shooter.
In The Optical Society (OSA) journal Optics Letters, researchers from the Massachusetts Institute of Technology’s Lincoln Laboratory report using two different laser-based methods to transmit various tones, music and recorded speech at a conversational volume.
“Our system can be used from some distance away to beam information directly to someone’s ear,” said research team leader Charles M. Wynn. “It is the first system that uses lasers that are fully safe for the eyes and skin to localize an audible signal to a particular person in any setting.”
Creating sound from air
The new approaches are based on the photoacoustic effect, which occurs when a material forms sound waves after absorbing light. In this case, the researchers used water vapor in the air to absorb light and create sound.
“This can work even in relatively dry conditions because there is almost always a little water in the air, especially around people,” said Wynn. “We found that we don’t need a lot of water if we use a laser wavelength that is very strongly absorbed by water. This was key because the stronger absorption leads to more sound.”
One of the new sound transmission methods grew from a technique called dynamic photoacoustic spectroscopy (DPAS), which the researchers previously developed for chemical detection. In the earlier work, they discovered that scanning, or sweeping, a laser beam at the speed of sound could improve chemical detection.
“The speed of sound is a very special speed at which to work,” said Ryan M. Sullenberger, first author of the paper. “In this new paper, we show that sweeping a laser beam at the speed of sound at a wavelength absorbed by water can be used as an efficient way to create sound.”
For the DPAS-related approach, the researchers change the length of the laser sweeps to encode different frequencies, or audible pitches, in the light. One unique aspect of this laser sweeping technique is that the signal can only be heard at a certain distance from the transmitter. This means that a message could be sent to an individual, rather than everyone who crosses the beam of light. It also opens the possibility of targeting a message to multiple individuals.
In the lab, the researchers showed that commercially available equipment could transmit sound to a person more than 2.5 meters away at 60 decibels using the laser sweeping technique. They believe that the system could be easily scaled up to longer distances. They also tested a traditional photoacoustic method that doesn’t require sweeping the laser and encodes the audio message by modulating the power of the laser beam.
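The sweep-encoding idea lends itself to a quick back-of-the-envelope calculation. Assuming, as a simplification of ours rather than a figure from the paper, that each sweep travels its path at the speed of sound and repeats immediately, the listener hears the sweep repetition rate as a pitch, so pitch and sweep length are related by f = v_sound / L:

```python
# Back-of-the-envelope sketch of the sweep-encoding idea described above:
# if each sweep travels at the speed of sound and repeats immediately, the
# listener hears the sweep repetition rate as a pitch. The formula
# f = v_sound / sweep_length is our simplifying assumption, not a figure
# from the Optics Letters paper.
V_SOUND = 343.0  # m/s in air at roughly 20 C

def sweep_length_for_pitch(freq_hz: float) -> float:
    """Sweep path length (m) whose repetition at the speed of sound yields freq_hz."""
    return V_SOUND / freq_hz

for note, f in [("A4", 440.0), ("A5", 880.0)]:
    print(f"{note}: {sweep_length_for_pitch(f):.3f} m")
```

Under this simplification, longer sweeps give lower pitches, and halving the sweep length raises the pitch by an octave.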
“There are tradeoffs between the two techniques,” said Sullenberger. “The traditional photoacoustics method provides sound with higher fidelity, whereas the laser sweeping provides sound with louder audio.”
Next, the researchers plan to demonstrate the methods outdoors at longer ranges. “We hope that this will eventually become a commercial technology,” said Sullenberger. “There are a lot of exciting possibilities, and we want to develop the communication technology in ways that are useful.”
Paper: R. M. Sullenberger, S. Kaushik, C. M. Wynn. “Photoacoustic communications: delivering audible signals via absorption of light by atmospheric H2O,” Opt. Lett., 44, 3, 622-625 (2019).
Bridging the Bio-Electronic Divide
New effort aims for fully implantable devices able to connect with up to one million neurons
Posted on DARPA official website https://www.darpa.mil on 1/19/2016
A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.
The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.
“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”
Among the program’s potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.
Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.
Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.
To accelerate that integrative process, the NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping and manufacturing services and intellectual property to NESD researchers on a pre-competitive basis. In later phases of the program, these partners could help transition the resulting technologies into research and commercial application spaces.
To familiarize potential participants with the technical objectives of NESD, DARPA will host a Proposers Day meeting that runs Tuesday and Wednesday, February 2-3, 2016, in Arlington, Va. The Special Notice announcing the Proposers Day meeting is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-16/listing.html. More details about the Industry Group that will support NESD are available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-17/listing.html. A Broad Agency Announcement describing the specific capabilities sought is available at: http://go.usa.gov/cP474.
DARPA anticipates investing up to $65 million in the NESD program over four years.
NESD is part of a broader portfolio of programs within DARPA that support President Obama’s brain initiative. For more information about DARPA’s work in that domain, please visit: http://www.darpa.mil/program/our-research/darpa-and-the-brain-initiative.
China introduces artificial intelligence thought police to improve worker efficiency, military loyalty
By Jamie Seidel
Posted on https://www.news.com.au on May 2, 2018
FOR factory workers in China, there’s no such thing as privacy, as employers have begun forcing staff members to wear devices that monitor their mental states.
FACTORY workers. Military personnel. Train drivers. If you’re employed in China, your thoughts are not your own.
Headwear with built-in sensors is being distributed through China’s state-owned companies to monitor the brain waves of their workers.
Details on the “emotional surveillance” device are thin.
All we know is that the state of your mind, as determined by the sensors in your hat, is being transmitted to a central artificial intelligence algorithm intended to identify thoughts of anger, anxiety and sadness.
The South China Morning Post says the technology was introduced to a dozen military and business sites in 2014. It cites one state-owned company, State Grid Zhejiang Electric Power, as reporting a $US315 million leap in profits once the sensors were fitted to its 40,000 staff.
“They thought we could read their mind. This caused some discomfort and resistance in the beginning,” Jin Jia, a professor of brain science at Ningbo University told the Post.
“After a while they got used to the device … They wore it all day at work.”
A similar set of sensors is being used in the caps of train drivers on a high-speed line between Beijing and Shanghai. It’s intended to monitor concentration levels, and to detect whether the drivers fall asleep.
In both cases, the results were reportedly used to tailor the frequency and length of rest breaks, or even to send workers home, to maximise overall efficiency.
Such surveillance technology fits a growing push in China to assign every individual citizen a secret ‘loyalty score’.
If they say the wrong things on social media. If they fail to attend official functions. If their performance slips … they get points deducted. If they’re seen to promote the Party line and be productive, they get bonus points.
It’s a system already having a real-life impact in China.
One way you can discover you’re out of favour with the ruling party is to have your purchase of train or airline tickets declined.
And police are already trialling portable face-recognition software that combines with their sunglasses to detect ‘persons of interest’ in crowds.
Researchers from the Massachusetts Institute of Technology (MIT) doubt the ‘thought caps’ are more than an intimidation tactic.
Do they actually work?
“Yeah, probably not,” the MIT Technology Review says. “Over-the-skin brain scanning through EEG is still very limited in what it can detect, and the relationship between those signals and human emotion is not yet clear. Being able to gather enough information to somehow get a two billion yuan ($US315 million) boost in profits — which is what one firm, State Grid Zhejiang Electric Power, claims in the piece — is incredibly doubtful.”
The MIT Review states claims about the technology’s efficacy are almost certainly being embellished.
“If it’s just an attempt to talk up a technological ‘breakthrough,’ that’s one thing. But (is it) being used to reassign workers — or potentially even terminate them — because of their perceived emotions? In that case, China is indeed leading the way in workplace surveillance in a way that stands to benefit no one.”
Frighteningly accurate ‘mind reading’. AI reads brain scans to guess what you’re thinking
By Luke Dormehl
Posted on https://www.digitaltrends.com on June 28, 2017
From medical applications like helping dermatologists diagnose skin cancer to teaching robots to get a better grip on the world around them, deep learning neural networks can carry out some pretty impressive tasks. Could mind reading be among them?
The folks at Carnegie Mellon University certainly think so — and they’ve got the research to back up their theories. What CMU scientists have been working on is a system that can apparently read complex thoughts based on brain scans, possibly even interpreting complete sentences.
This involved gathering data from a functional magnetic resonance imaging (fMRI) machine, and then using AI machine learning algorithms to pinpoint — and sometimes reverse-engineer — the building blocks the brain uses to construct complex thoughts.
“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in the evening with my friends,’” said psychology professor Marcel Just, the lead author of the study, in a press release. “We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of.”
In the CMU study, the team was able to demonstrate different brain activations being triggered according to 240 complex events, ranging from individuals and settings to types of social interaction or physical actions. Using the smart algorithm, the team could discern what was being thought about at any given time — and even the order of a particular sentence. After training the algorithm on 239 of the 240 sentences and their corresponding brain scans, the researchers were able to predict the final sentence based only on the brain data. It was able to do this with an impressive 87 percent accuracy, as well as doing the opposite: being given sentence information and then outputting an accurate image of how the brain would be activated during that sentence.
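The leave-one-out evaluation described above can be mimicked with synthetic data. In this sketch only the evaluation scheme mirrors the study: the "scans" are random vectors, and a nearest-neighbor match stands in for the paper's actual trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy leave-one-out decoding in the spirit of the CMU protocol: each
# "sentence" gets a characteristic activation pattern, we hold one scan out,
# and predict its sentence by nearest-neighbor matching against the
# idealized patterns. The data are synthetic; the dimensions and the
# nearest-neighbor rule are our assumptions, not the study's method.
n_sentences, n_voxels = 240, 50
prototypes = rng.normal(size=(n_sentences, n_voxels))          # per-sentence patterns
scans = prototypes + 0.3 * rng.normal(size=prototypes.shape)   # noisy observed scans

correct = 0
for held_out in range(n_sentences):
    # Match the held-out scan against every sentence's prototype pattern.
    dists = np.linalg.norm(prototypes - scans[held_out], axis=1)
    correct += int(np.argmin(dists) == held_out)

print(f"leave-one-out accuracy: {correct / n_sentences:.2%}")
```

The sketch also shows why this evaluation is demanding: the decoder must pick the right sentence out of all 240 candidates, so chance performance is well under one percent.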
We’re guessing this doesn’t bode well for our eventual face-off with Skynet!
A paper describing the work, titled “Predicting the Brain Activation Pattern Associated With the Propositional Content of a Sentence,” is published in the new issue of the journal Human Brain Mapping.
No more secrets! New mind-reading machine can translate your thoughts and display them as text INSTANTLY
By Danyal Hussain
on 31 March, 2018
Scientists have developed an astonishing mind-reading machine which can translate what you are thinking and instantly display it as text.
They claim that it has an accuracy rate of 90 per cent or more and say that it works by interpreting consonants and vowels in our brains.
The researchers believe that the machine could one day help patients who suffer from conditions that don’t allow them to speak or move.
The machine registers and analyses the combination of vowels and consonants that we use when constructing a sentence in our brains.
It interprets these sentences based on neural signals and can translate them into text in real time.
In fact, scientists claim that the machine can use words that it hasn’t even heard before.
Study leader David Moses told the Sun: ‘No published work has demonstrated real-time classification of sentences from neural signals.
‘Given the performance exhibited by [the machine] in this work and its capacity for expansion, we are confident in its ability to serve as a platform for the proposed speech prosthetic device.’
There are fears from critics, however, that the device will cause problems if secret thoughts are exposed accidentally.
The device was developed at the University of California, San Francisco, and the work is described in the Journal of Neural Engineering.
Scientists discover how to upload knowledge to your brain
By Mark Molloy
Published in Telegraph on 1 March, 2016
Feeding knowledge directly into your brain, just like in sci-fi classic The Matrix, could soon take as much effort as falling asleep, scientists believe. Researchers claim to have developed a simulator which can feed information directly into a person’s brain and teach them new skills in a shorter amount of time, comparing it to “life imitating art”. They believe it could be the first steps in developing advanced software that will make Matrix-style instant learning a reality. In the neo-noir sci-fi classic, protagonist Neo is able to learn kung fu in seconds after the martial art is ‘uploaded’ straight to his brain.
Researchers from HRL Laboratories, based in California, say they have found a way to amplify learning, only on a much smaller scale than seen in the Hollywood film. They studied the electric signals in the brain of a trained pilot and then fed the data into novice subjects as they learned to pilot an aeroplane in a realistic flight simulator. The study, published in the journal Frontiers in Human Neuroscience, found that subjects who received brain stimulation via electrode-embedded head caps improved their piloting abilities and learnt the task 33 per cent better than a placebo group.
“Our system is one of the first of its kind. It’s a brain stimulation system,” explained Dr Matthew Phillips. “It sounds kind of sci-fi, but there’s a large scientific basis for the development of our system.

“The specific task we were looking at was piloting an aircraft, which requires a synergy of both cognitive and motor performance.

“When you learn something, your brain physically changes. Connections are made and strengthened in a process called neuro-plasticity. It turns out that certain functions of the brain, like speech and memory, are located in very specific regions of the brain, about the size of your pinky.”

Dr Phillips believes that brain stimulation could eventually be implemented for tasks like learning to drive, exam preparation and language learning.

“What our system does is it actually targets those changes to specific regions of the brain as you learn,” he added. “The method itself is actually quite old. In fact, the ancient Egyptians 4000 years ago used electric fish to stimulate and reduce pain. Even Ben Franklin applied currents to his head, but the rigorous, scientific investigation of these methods started in the early 2000s and we’re building on that research to target and personalize a stimulation in the most effective way possible.

“Your brain is going to be very different to my brain when we perform a task. What we found is … brain stimulation seems to be particularly effective at actually improving learning.”
‘Humans will upgrade themselves continuously’
Bits of exoskeleton hanging by the front door will make you faster and stronger. Those who can afford it will have better eyesight and hearing.
– Tamar Kasriel, founder and MD of Futureal
‘Poverty and hunger have been all but eliminated – by Uber’
Uber, the world’s premier logistics, transportation, and energy company, has entirely eliminated urban “food islands” in developed areas of the world.
– Mark Drapeau, head of content, World Future Society; editor, The Futurist
‘You’ll be able to purchase high-quality emotions online’
Emotion-sharing experiences are the latest fad in 2045. Imagine your friend at Glastonbury posts a photo on Instagram, and bundled with it comes a faint twinkling of what she was feeling right there in that moment.
– Alex Ayad, head of Imperial College London’s Tech Foresight Practice
‘Your car will be able to read your feelings’
Machines will be able to sense and then adapt themselves to the emotional state of a user. If a car decides you are angry and in danger of driving unsafely, it might adapt itself to make things safer.
– Richard Watson, futurist, writer, founder of online magazine What’s Next
This A.I. literally reads your mind to re-create images of the faces you see
By Luke Dormehl
Posted on https://www.digitaltrends.com
In a test, subjects were hooked up to EEG brainwave-reading equipment and shown images of faces. While this happened, their brain activity was recorded and then analyzed using machine learning algorithms. Impressively, the researchers were able to use this information to digitally re-create the face image stored in the person’s mind. Unlike basic shapes, faces require a high level of fine-grained visual detail to re-create, showcasing the sophistication of the technology.
While this isn’t the first time that A.I. has been used to read people’s minds, it is the first time it has been achieved using EEG data. Previous studies relied on fMRI technology, which measures brain activity by detecting changes in blood flow. One of the most exciting differences between the two techniques is that EEG is far more portable and inexpensive, and it captures brain activity with millisecond-level temporal resolution.
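One common way to set up this kind of reconstruction, and a plausible reading of the approach described above (our assumption, not the paper's exact pipeline), is to express each face as coordinates in a low-dimensional "face space", learn a linear map from EEG features to those coordinates, and rebuild the image as a weighted sum of basis faces:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: basis_faces plays the role of "eigenfaces", and each
# viewed face is a weighted combination of them. EEG features are simulated
# as a noisy linear readout of those weights. The dimensions, the linearity,
# and the data itself are all assumptions for illustration.
n_trials, n_eeg, n_dims, n_pixels = 300, 80, 10, 400
basis_faces = rng.normal(size=(n_dims, n_pixels))
coords = rng.normal(size=(n_trials, n_dims))              # true face-space coordinates
eeg = coords @ rng.normal(size=(n_dims, n_eeg)) + 0.2 * rng.normal(size=(n_trials, n_eeg))

# Learn the EEG -> face-space map on 250 trials, reconstruct the held-out 50.
W, *_ = np.linalg.lstsq(eeg[:250], coords[:250], rcond=None)
recon_images = (eeg[250:] @ W) @ basis_faces              # weighted sum of basis faces
true_images = coords[250:] @ basis_faces
r = np.corrcoef(recon_images.ravel(), true_images.ravel())[0, 1]
print(f"held-out image correlation: {r:.2f}")
```

Working in a compact face space rather than raw pixels is what makes the problem tractable: the regression only has to predict a handful of coordinates per trial, not hundreds of pixel values.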
The technology could potentially be used by law enforcement to create more accurate eyewitness accounts of a suspect’s likeness. Currently, this information is relayed to a sketch artist through verbal descriptions, which can lower its accuracy. It might also serve as a way of helping people who lack the ability to communicate verbally. The EEG technology could be employed to produce a neural-based reconstruction of what a person is perceiving at any given time, as well as to visualize memories or imagined scenes, letting them express themselves.
In the future, the team hopes to build on this work by looking at how effectively they can reconstruct images with EEG data, based on a person’s memory of an event. They also want to move beyond faces to explore whether they can recreate accurate images of other objects.
A paper describing the work, titled “The Neural Dynamics of Facial Identity Processing: insights from EEG-Based Pattern Analysis and Image Reconstruction,” was recently published in the journal eNeuro.
Microchip Implants, Mind Control, and Cybernetics
by Rauni-Leena Luukanen-Kilde, MD, Former Chief Medical Officer of Finland,
Published in SPEKULA, 1999
Comment: This article was originally published in the 36th-year edition of the Finnish-language journal SPEKULA (3rd Quarter, 1999). SPEKULA (circulation 6500) is a publication of Northern Finland medical students and doctors of Oulu University OLK (Oulun Laaketieteellinen Kilta). It is mailed to all medical students of Finland and all Northern Finland medical doctors.
Microchip Implants, Mind Control, and Cybernetics was first posted at Illuminati News on December 6, 2000, and this is an unaltered repost of that same article. It is now a ‘classic’ on the subject of mind control and implants, and it needs to be read again, over and over, to remind us what is waiting if we don’t act. Wes Penre, www.illuminati-news.com
In 1948 Norbert Wiener published a book, Cybernetics, defined as a neurological communication and control theory already in use in small circles at that time. Yoneji Masuda, “Father of the Information Society,” stated his concern in 1980 that our liberty is threatened Orwellian-style by cybernetic technology totally unknown to most people. This technology links the brains of people via implanted microchips to satellites controlled by ground-based supercomputers.
The first brain implants were surgically inserted in 1974 in the state of Ohio, USA and also in Stockholm, Sweden. Brain electrodes were inserted into the skulls of babies in 1946 without the knowledge of their parents. In the 1950s and 60s, electrical implants were inserted into the brains of animals and humans, especially in the U.S., during research into behavior modification, and brain and body functioning. Mind control (MC) methods were used in attempts to change human behavior and attitudes. Influencing brain functions became an important goal of military and intelligence services.
Thirty years ago, brain implants the size of one centimeter showed up in X-rays. Subsequent implants shrank to the size of a grain of rice. They were made of silicon, later still of gallium arsenide. Today they are small enough to be inserted into the neck or back, and also intravenously in different parts of the body during surgical operations, with or without the consent of the subject. It is now almost impossible to detect or remove them.
It is technically possible for every newborn to be injected with a microchip, which could then function to identify the person for the rest of his or her life. Such plans are secretly being discussed in the U.S. without any public airing of the privacy issues involved. In Sweden, Prime Minister Olof Palme gave permission in 1973 to implant prisoners, and Data Inspection’s ex-Director General Jan Freese revealed that nursing-home patients were implanted in the mid-1980s. The technology is revealed in the 1972:47 Swedish state report, Statens Offentliga Utredningar (SOU).
Implanted human beings can be followed anywhere. Their brain functions can be remotely monitored by supercomputers and even altered through the changing of frequencies. Guinea pigs in secret experiments have included prisoners, soldiers, mental patients, handicapped children, deaf and blind people, homosexuals, single women, the elderly, school children, and any group of people considered “marginal” by the elite experimenters. The published experiences of prisoners in Utah State Prison, for example, are shocking to the conscience.
Today’s microchips operate by means of low-frequency radio waves that target them. With the help of satellites, the implanted person can be tracked anywhere on the globe. Such a technique was among a number tested in the Iraq war, according to Dr. Carl Sanders, who invented the intelligence-manned interface (IMI) biotic, which is injected into people. (Earlier during the Vietnam War, soldiers were injected with the Rambo chip, designed to increase adrenaline flow into the bloodstream.) The 20-billion-bit/second supercomputers at the U.S. National Security Agency (NSA) could now “see and hear” what soldiers experience in the battlefield with a remote monitoring system (RMS).
When a 5-micromillimeter microchip (the diameter of a strand of hair is 50 micromillimeters) is placed into the optic nerve of the eye, it draws neuroimpulses from the brain that embody the experiences, smells, sights, and voice of the implanted person. Once transferred and stored in a computer, these neuroimpulses can be projected back to the person’s brain via the microchip to be reexperienced. Using an RMS, a land-based computer operator can send electromagnetic messages (encoded as signals) to the nervous system, affecting the target’s performance. With RMS, healthy persons can be induced to see hallucinations and to hear voices in their heads.
Every thought, reaction, hearing, and visual observation causes a certain neurological potential, spikes, and patterns in the brain and its electromagnetic fields, which can now be decoded into thoughts, pictures, and voices. Electromagnetic stimulation can therefore change a person’s brainwaves and affect muscular activity, causing painful muscular cramps experienced as torture.
The NSA’s electronic surveillance system can simultaneously follow and handle millions of people. Each of us has a unique bioelectrical resonance frequency in the brain, just as we have unique fingerprints. With electromagnetic frequency (EMF) brain stimulation fully coded, pulsating electromagnetic signals can be sent to the brain, causing the desired voice and visual effects to be experienced by the target. This is a form of electronic warfare. U.S. astronauts were implanted before they were sent into space so their thoughts could be followed and all their emotions could be registered 24 hours a day.
The Washington Post reported in May 1995 that Prince William of Great Britain was implanted at the age of 12. Thus, if he were ever kidnapped, a radio wave with a specific frequency could be targeted to his microchip. The chip’s signal would be routed through a satellite to the computer screen of police headquarters, where the Prince’s movements could be followed. He could actually be located anywhere on the globe.
The mass media has not reported that an implanted person’s privacy vanishes for the rest of his or her life. S/he can be manipulated in many ways. Using different frequencies, the secret controller of this equipment can even change a person’s emotional life. S/he can be made aggressive or lethargic. Sexuality can be artificially influenced. Thought signals and subconscious thinking can be read, dreams affected and even induced, all without the knowledge or consent of the implanted person.
A perfect cyber-soldier can thus be created. This secret technology has been used by military forces in certain NATO countries since the 1980s without civilian and academic populations having heard anything about it. Thus, little information about such invasive mind-control systems is available in professional and academic journals.
The NSA’s Signals Intelligence group can remotely monitor information from human brains by decoding the evoked potentials (3.50HZ, 5 milliwatt) emitted by the brain. Prisoner experimentees in both Gothenburg, Sweden and Vienna, Austria have been found to have evident brain lesions. Diminished blood circulation and lack of oxygen in the right temporal frontal lobes result where brain implants are usually operative. A Finnish experimentee experienced brain atrophy and intermittent attacks of unconsciousness due to lack of oxygen.
Mind control techniques can be used for political purposes. The goal of mind controllers today is to induce targeted persons or groups to act against their own convictions and best interests. Zombified individuals can even be programmed to murder and remember nothing of their crime afterward. Alarming examples of this phenomenon can be found in the U.S.
This “silent war” is being conducted against unknowing civilians and soldiers by military and intelligence agencies. Since 1980, electronic stimulation of the brain (ESB) has been secretly used to control people targeted without their knowledge or consent. All international human rights agreements forbid nonconsensual manipulation of human beings — even in prisons, not to speak of civilian populations.
Under an initiative of U.S. Senator John Glenn, discussions commenced in January 1997 about the dangers of radiating civilian populations. Targeting people’s brain functions with electromagnetic fields and beams (from helicopters and airplanes, satellites, from parked vans, neighboring houses, telephone poles, electrical appliances, mobile phones, TV, radio, etc.) is part of the radiation problem that should be addressed in democratically elected government bodies.
In addition to electronic MC, chemical methods have also been developed. Mind-altering drugs and different smelling gases that negatively affect brain function can be injected into air ducts or water pipes. Bacteria and viruses have also been tested this way in several countries.
Today’s supertechnology, connecting our brain functions via microchips (or even without them, according to the latest technology) to computers via satellites in the U.S. or Israel, poses the gravest threat to humanity. The latest supercomputers are powerful enough to monitor the whole world’s population. What will happen when people are tempted by false premises to allow microchips into their bodies? One lure will be a microchip identity card. Compulsory legislation has even been secretly proposed in the U.S. to criminalize removal of an ID implant.
Are we ready for the robotization of mankind and the total elimination of privacy, including freedom of thought? How many of us would want to cede our entire life, including our most secret thoughts, to Big Brother? Yet the technology exists to create a totalitarian New World Order. Covert neurological communication systems are in place to counteract independent thinking and to control social and political activity on behalf of self-serving private and military interests.
When our brain functions are already connected to supercomputers by means of radio implants and microchips, it will be too late for protest. This threat can be defeated only by educating the public, using available literature on biotelemetry and information exchanged at international congresses.
One reason this technology has remained a state secret is the widespread prestige of the psychiatric Diagnostic and Statistical Manual (DSM-IV), produced by the American Psychiatric Association (APA) and printed in 18 languages. Psychiatrists working for U.S. intelligence agencies no doubt participated in writing and revising this manual. This psychiatric “bible” covers up the secret development of MC technologies by labeling some of their effects as symptoms of paranoid schizophrenia.
Victims of mind control experimentation are thus routinely diagnosed, knee-jerk fashion, as mentally ill by doctors who learned the DSM “symptom” list in medical school. Physicians have not been schooled that patients may be telling the truth when they report being targeted against their will or being used as guinea pigs for electronic, chemical and bacteriological forms of psychological warfare.
Time is running out for changing the direction of military medicine, and ensuring the future of human freedom.
‘Matador’ With a Radio Stops Wired Bull
Modified Behavior in Animals Subject of Brain Study
By John A. Osmundsen
Reported in New York Times on 17 May, 1965
Afternoon sunlight poured over the high wooden barriers into the ring as the brave bull bore down on the unarmed “matador” — a scientist who had never faced a fighting bull. But the charging animal’s horns never reached the man behind the heavy red cape. Moments before that could happen, Dr. Jose M. R. Delgado, the scientist, pressed a button on a small radio transmitter in his hand, and the bull braked to a halt.
Then, he pressed another button on the transmitter and the bull obediently turned to the right and trotted away. The bull was obeying commands from his brain that had been called forth by electrical stimulation—by the radio signals—of certain regions in which fine wire electrodes had been painlessly implanted the day before.
The experiment, conducted last year in Cordova, Spain, by Dr. Delgado of Yale University’s School of Medicine, was probably the most spectacular demonstration ever performed of the deliberate modification of animal behavior through external control of the brain.
Dr. Delgado was trying to find out what makes brave bulls brave — just as other of his experiments have aimed at finding the biological basis for emotions, personality and behavior in man and other animals through electrical stimulation of their brains.
He has been working in this field for more than 15 years. Techniques that he and other scientists have recently developed have been refined to the point where, he believes, “a turning point has been reached in the study of the mind.”
“I do believe,” he said in a recent lecture, “that an understanding of the biological bases of social and antisocial behavior and of mental activities, which for the first time in history can now be explored in a conscious brain, may be of decisive importance in the search for intelligent solutions to some of our present anxieties, frustrations and conflicts.”
Dr. Delgado said in an interview recently that he was particularly concerned with what he called the “gap between our understanding of the atom and our understanding of the mind.”
“We are in a precarious race,” he said, “between the acquisition of many megatons of destructive power and the development of intelligent human beings who will make intelligent use of the formidable forces at our disposal.”
Based on His Experiments
Dr. Delgado’s contention that brain research has reached a stage of refinement where it can contribute to the solution of some of these problems is based, he said, on many of his own experiments. These have shown, he explained, that “functions traditionally related to the psyche, such as friendliness, pleasure or verbal expression, can be induced, modified and inhibited by direct electrical stimulation of the brain.”
For example, he has been able to “play” monkeys and cats “like little electronic toys” that yawn, hide, fight, play, mate and go to sleep on command. And with humans under treatment for epilepsy, he has increased word output sixfold in one person, has produced severe anxiety in another, and in several others has induced feelings of profound friendliness—all by electrical stimulation of various specific regions of their brains.
The evocation of bodily responses from electrically stimulated brains goes back to the middle of the 19th century, when scientists produced limb movements and other reactions by applying weak electrical currents to the exposed brains of anesthetized animals.
Emotions Were Inaccessible
One trouble with that sort of work, however, was that the animals were asleep, and thus many of the most important aspects of brain activity, such as emotions and intelligence, were inaccessible to study. This limitation was overcome at the turn of the century by the development of techniques to insert wires into the animal’s brain through an ivory plug screwed into the skull. This served as an anchor for the wires, which carried weak stimulating currents from dry cell batteries.
In 1932, Dr. W. R. Hess of Switzerland used a similar set-up to stimulate various cerebral regions in conscious cats. He showed that electrical currents could influence the animal’s posture, balance, movement and such basic psychic manifestations as fear and rage. For some still unexplained reason, those techniques were not used much by biologists until the early nineteen-fifties. Then important developments in brain surgery, psychosomatic medicine, psycho-pharmacology and physiological psychology turned the attention of scientists to electrical exploration of the brain.
Makes Use of Telemetry
Of all the scientists who are working in this area, however, Dr. Delgado appears to be the only one using radio to stimulate animals’ brains, with special attention to effects on social behavior. He also makes use of telemetry in studying physiological activity in brains and other organs.
“I do not know why more work of this sort isn’t done,” he remarked recently, “because it is so economical and easy.” Essentially, Dr. Delgado’s system for studying social behavior consists of constant time-lapse photography of animal colonies, the analysis of those films and the recording of details in the behavior patterns of all the animals.
This permits not just qualitative assessment of the animals’ social interactions but also the quantification of each one’s behavioral profile, Dr. Delgado said. This is particularly important when analyzing the modifications in social behavior of the group produced by radio stimulation of a particular response in one or more of the animals.
For example, stimulation of several specific regions of the brain can induce aggressiveness in a monkey. Having quantitative data on that animal’s behavior, as well as on that of others in the colony, can reveal more precisely the magnitude of various, sometimes subtle, effects of electrical stimulation on individual and collective social behavior.
Some of the Results Listed
With such techniques, Dr. Delgado has shown:
Monkeys will learn to press a button that sends a stimulus to the brain of an enraged member of the colony and calms it down, indicating that animals can be taught to control one another’s behavior.
A monkey, stimulated to extremely aggressive behavior, will make “intelligent” attacks only on competitive members of the colony, sparing other, friendlier, ones.
Monkeys and cats can be triggered into sequential behavior in which one might open its mouth, turn around, walk to a corner, climb a wall, jump down and return to “start,” repeating those movements in the same order every time they are stimulated, but they will modify the pattern if other animals get in the way or if they are threatened.
The latter two experiments show that electrical brain stimulation does not simply evoke automatic responses but reactions that become integrated into social behavior according to the individual’s own personality or temperament, Dr. Delgado said.
Experiments have been conducted on human beings by Dr. Delgado and other scientists, primarily during the treatment of certain types of epilepsy. Stimulation of particular areas of the brain has produced anxiety, profound feelings of friendliness and, in one case, a sixfold increase in word output.
The Yale neurophysiologist believes that techniques such as the one he is using can lead to the discovery of the cerebral basis of anxiety, pleasure, aggression and other mental functions and that “we shall be in a much better position to influence their development and manifestation (in various ways); especially by means of more scientifically programed education.”
We are moving ever closer to the era of mind control
by Steven Rose
Reported in the Observer on February 05 2006 on p31 of the Comment section
Brain scientists are on a roll. Concern about rising levels of mental distress has resulted in unprecedented levels of funding in the US and Europe. And a range of new technologies, from genetics to brain imaging, is offering extraordinary insights into the molecular and cellular processes underlying how we see, how we remember, why we become emotional.
Brain imaging has become familiar. Scanners, known by their initials – CAT, PET, MRI – began as clinical tools, enabling surgeons to identify potential tumours, the damage following a stroke or the diagnostic signs of incipient dementia. But neuroscientists quickly seized on their wider potential. The images of regions of the brain ‘lighting up’ when a person is thinking of their lover, imagining travelling from home to the shops, or solving a mathematical problem, have captured the imagination of researchers and public alike. What if they could do more?
Recently I published the results of an experiment in which we looked at the regions of the brain that became active when people chose between competing products in supermarkets. Major companies, ranging from Coca-Cola to BMW, are starting to image the brains of potential customers to study how they respond to new designs or brands. They are beginning to speak of ‘neuromarketing’ and ‘neuroeconomics.’
Such trends may be relatively innocuous, but the increasing state interest in what the images might reveal is less so. Specifically, what if brain imaging could predict future behaviour, or indicate guilt or innocence of a crime? There are claims, for example, that it could reveal potential ‘psychopathy’, that the brains of men convicted of brutal murders show significantly abnormal patterns.
In the current legislative climate, where there have been attempts to introduce pre-emptive detention for ‘psychopaths’ who have not yet been convicted of any crime, such claims need to be addressed critically. They are and will be resisted by the judiciary, but recent developments suggest that this may be a frail defence against an increasingly authoritarian state.
More seriously, there is increasing military interest in the development of techniques that can survey and possibly manipulate the mental processes of potential enemies, or enhance the potential of one’s own troops. There is nothing new about such an interest. In the US, it stretches back at least half a century. Impressed by claims that the Soviet Union was developing psychological warfare, the CIA and the Defence Advanced Projects Agency (Darpa) began their own programmes. Early experiments included the clandestine feeding of LSD to their own operatives and attempts at ‘brain-washing’. These were the forerunners of the hoods and white noise used by the British in Northern Ireland – until judged illegal – and more recently in Abu Ghraib and Guantanamo, where they inhabit an uncertain borderline between what the US government regards as an acceptable level of violence and the torture that it denies committing.
By the 1960s, Darpa, along with the US Navy, was funding almost all US research into ‘artificial intelligence’, in order to develop methods and technologies for the ‘automated battlefield’ and the ‘intelligent soldier’. Contracts were let and patents taken out on techniques aimed at recording signals from the brains of enemy personnel at a distance, in order to ‘read their minds’.
These efforts have burgeoned in the aftermath of the so-called ‘war on terror’. One US company claims to have developed a technique called ‘brain-fingerprinting’, which can ‘determine the truth regarding a crime, terrorist activities or terrorist training by detecting information stored in the brain’. The stress of lying under interrogation is supposed to result in a specific wave form which electrodes measuring the brain’s fluctuating electrical signals can detect. We may be sceptical about the validity of such methods, but they indicate the direction in which research is heading. The company claims its procedures have been accepted in evidence in court in the US.
The step beyond reading thoughts is to attempt to control them directly. A new technique – transcranial magnetic stimulation (TMS) – has begun to generate interest. This focuses an intense magnetic field on specific brain regions, and has been shown to affect thoughts, perceptions and behaviour. There are suggestions it could be used to control obsessive-compulsive behaviour, while some even take seriously the scenario envisaged in the film Eternal Sunshine of the Spotless Mind, in which TMS was used to erase unwanted memories of a love affair gone wrong. Currently only possible if a subject’s head is put inside the relevant machine, TMS at a distance is now under active military investigation. So is chip technology, which might provide implanted prostheses to overcome sensory deficits or control behaviour, and whose potential bioethics committees around Europe have been scrutinising.
It is tempting to dismiss all these as technological fantasies and their proponents as sellers of snake oil, but the fact that a technology is faulty doesn’t mean it won’t be used. One only has to think of the tens of thousands of lobotomies carried out on schizophrenic patients in the past century. Britain is one of the world’s leading examples of a surveillance society, observing its citizens through CCTV cameras and controlling their behaviour with Asbos and Ritalin. The potential for surveillance of citizens’ thoughts has moved far beyond the visions of 1984.
Science cannot happen without major public or private expenditure, but its goals are set at least as much by the market and the military as by the disinterested pursuit of knowledge. This is why neuroscientists have a responsibility to make their subject and its potentials as transparent as possible, and why the voices of concerned citizens should be heard not ‘downstream’ when the technologies are already fully formed, but ‘upstream’ while the science is still in progress. We have to find ways of ensuring that such voices are listened to through the cacophony of slogans about ‘better brains’ – and the power of the military and the market.
Activists Inform Canadian Journalists of Ongoing Neuro-Experimentation & Mind Control Projects While Irregularities in Rohinie Bisesar’s Court Case Mount
by Ramola D at Washington Blog on 14 May, 2016
Recently, human rights activists representing the Canadian Organization for Victims of Psychotronic or Mind Control Weapons and World CACH (World Coalition Against Covert Harassment) made attempts to contact Rohinie Bisesar, the highly-educated MBA charged with first-degree murder in the unprovoked stabbing death of Rosemary Junor, a health-care worker and new bride, covered here earlier. They also sought to inform journalists of ongoing neuro-experimentation and mind control projects on Canadian citizens.
Anti-Mind-Control Activists Seek to Support Rohinie Bisesar
On April 30, Galina Kurdina and Joshua Byer, activists with the Organization for Victims of Psychotronic Weapons, and representatives of World CACH, visited Vanier Center for Women, the jail where Rohinie Bisesar has been confined, having booked a meeting with her in advance. On arrival at the jail, they were accosted by a woman prison guard who asked what organization they represented. When informed, she told them the meeting was cancelled. “We came to Vanier Center for Women to see Rohinie today at 6.15 P.M., but the meeting was cancelled. The guard said that Rohini’s safety, security, and mental health were in danger by allowing her to talk to us. The guard acted against us and Ms. Bisesar.”
Ms. Kurdina and Mr. Byer had sought to relay a message of support and confirmation to Ms. Bisesar, who has said openly in court that she did not need a psychiatric assessment but a body scan, and that she believed she had been the subject of “experiments which had gone wrong.” She has also stated that she felt “something foreign had been put in (her) mind,” and had tried earlier to research nanotechnology and Artificial Intelligence.
Galina Kurdina is well known in Canada as an activist who has repeatedly petitioned the Canadian government and approached Canadian media to seek investigations into non-consensual brain experimentation, and who has launched lawsuits against mis-diagnosing psychiatrists. She states that she, like many other Canadians, as well as Americans, Europeans, Asians, Australians, and others worldwide, has experienced the identical symptoms that Rohinie Bisesar describes, and that these are the hallmarks of covert neuro-experimentation and mind control projects, not mental illness.
Similarities to Rohinie Bisesar’s Case
In reports strikingly similar to Rohinie Bisesar’s, Galina Kurdina notes that she too has experienced “microwave hearing,” a patented technology to put voices inside people’s heads, and that one of the messages she received from the neuro-experimenters via microwave hearing was that “they were going to create a hybrid of a computer and a human being in order to suppress (her) will and manipulate (her) as a bio robot.”
In an affidavit, she reports experiencing various effects such as pressure on the forehead destroying concentration, triggered actions and a subsequent deletion of memory of them, and pressure on the crown inducing a hypnotic state. “I get a non-stop stream of words, commands, impulses, pictures…I am manipulated in this state 24/7, like a computer device, compelled to say or do something I do not want to say or to do.” She reports “forced speech, involuntary body movements, induced actions, change of facial expressions, tone and pitch of the voice, artificial change of emotions and desires, artificial laughter and tears.”
Letter to Rohinie Bisesar
In a letter addressed to Rohinie Bisesar and delivered to the Toronto office of Calvin Barry, Bisesar’s recently-fired lawyer, Galina Kurdina reports that members of the organization she represents are subjected to electronic harassment and mind control influence:
“Symptoms of mind control are voices in our heads, visual hallucinations, induced thoughts, forced speech (people say, what they do not want to say), involuntary body movements, induced actions (people do, what they do not want to do), sleep deprivation, cramps, seizures, artificial pain in different body parts and many other symptoms. We offer you our help, and if you want, we can testify about electronic harassment in the criminal court. You can contact me directly or through your lawyer.”
She goes on to note that she herself has been toxicologically tested and been found to have nano-materials stated by her toxicologist to be functioning as chips to send signals in her body, and could provide medical records “from a toxicologist Dr. Staninger (to prove) that she is a victim of specific electromagnetic effects” as well as records “from a private investigator Melinda Kidder, affidavit from a hypnotherapist June Steiner, letter/petition of our Organization, and (her) testimony.”
Since Calvin Barry is no longer Rohinie Bisesar’s lawyer, and since the activists were explicitly prevented from meeting directly with Rohinie at the Vanier jail, it is not known whether Rohinie Bisesar has received this letter yet. Galina Kurdina, who has also been seeking to contact Rohinie’s family, states that she hopes to mail her letter and documents this week to Vanier Center for the attention of Rohinie Bisesar.
Notable Irregularities in Rohinie Bisesar’s Court Case
Old City Hall, Toronto
As recently reported in the National Post, Rohinie Bisesar has made 15 court appearances since December 15, when she was apprehended and transported to Vanier Jail, and was hospitalized briefly during this time after occasions in court when she spoke up for herself, saying her lawyer was not representing her as she wished and asking for a medical body scan. These expressions have been characterized in corporate newspaper and TV reports as “bizarre,” “paranoid,” and “incoherent rants” on “conspiracy theories,” while her then-lawyer stressed her mental instability and requested a court-ordered psychiatric assessment, which was eventually granted and completed. This assessment by the Center for Addiction and Mental Health (CAMH) was intended to address whether she could be found mentally fit for trial, or would need psychiatric treatment to be deemed fit to stand trial, which is different from a court finding, after trial, of Not Criminally Responsible in the case of mental incompetence.
Her requests for a physical body scan and mention of brain or mind experiments on her appear to have been ignored in this rush by media, lawyers, prison psychiatrists, and the court system to characterize her essentially as mentally ill.
Additionally, in none of her court visits, either in person or by video link from the jail, was Rohinie Bisesar permitted to be photographed or filmed. News reports of the court appearances in Toronto media offer an artist’s depiction of Rohinie Bisesar in key moments of her speaking out, with no photos and no snippets of video. According to reports, a ban was placed on all visual imagery/filming of the proceedings inside the court. This in itself is highly unusual, given that court appearances of suspected murderers often function as public spectacle.
No bail hearing was held and no pre-trial hearing dates set. From April 22 onward, her appearances were moved to Mental Health Court at Old City Hall.
It must also be noted that after the death of stabbing victim Rosemary Junor, Rohinie Bisesar was initially charged with second-degree murder, a charge later upgraded by police to first-degree murder, which requires evidence of calculation and planning, even though all reports indicate that the victim was not known to her, and no evidence of planning has been presented.
Further, the recent psychiatric assessment by CAMH has not been shared with media as per reports. A publication ban was declared on this assessment by the court. (Whether this will be lifted and the contents made public remains to be seen.) Interestingly, the ban also included the day’s proceedings, including why Bisesar fired Barry–but perhaps we have an indication from this reportage as to what her reasons may be.
Are efforts being made currently to screen Rohinie Bisesar from public view and to withhold key information from the public during the course of her trial and court case?
Could it be that her verbal self-defence and declarations of being a subject of brain experimentation are being deliberately held back from public scrutiny because they might represent an eloquent, articulate affidavit of experience from a victimized individual rather than a “bizarre” “rant” by a mentally-ill person as characterized by media? And point to an actuality of covert neuro-experimentation and mind control that is currently being experienced and reported on eloquently, and articulately, by thousands, worldwide? An actuality continuously ignored, and often mis-represented as “conspiracy theory,” it must be noted, by controlled establishment media.
Activists Contact Reporters With Information on Mind Control Experimentation
A small group of activists who report subjection to electronic harassment, mind control, and neuro-experimentation with psychotronic weapons held an awareness-raising demonstration on May 4 in front of Old City Hall, where Rohinie Bisesar’s most recent court appearance was slated to be held, and spoke to a Bell Media reporter about covert neuro-experimentation. However, no mention of their presence or the information they provided showed up in later television coverage.
Other activists, including this writer, contacted reporters at the Toronto Sun, Toronto Star, Toronto Life, and the National Post by email and offered links to websites and books covering the current-day phenomenon of global non-consensual neuro-experimentation projects by military and Intelligence agencies, seemingly continuing the intrusive mind control experiments of the CIA’s MK ULTRA.
Activists requested that reporters investigate further:
“Please understand that we are currently living in an extraordinary time period, when citizens worldwide…are being non-consensually experimented on, in covert projects, with covert technologies. Microchips, neuroimplants, Deep Brain Stimulation, Remote Encephalograms, Remote Neural Monitoring, and Directed-Energy Weapons are being tested and operated on citizens.” –This writer.
“We are concerned citizens around the world that know chipping “to read/write/change/cause human mind/body/behavior (changes) of persons” as a fact and not something that needs psychiatric evaluation, but a radiologist’s report on the basis of an MRI/CT/X-ray.” –Suja Vijayan, Director, World CACH.
“We have read your article about Rohinie Bisesar in the National Post and want to say that her complaints about experiments upon her may be real. We are writing you…to ask for your help for the many constituents in Canada who are being targeted unjustly or used as human subjects in experiments without their informed consent. We, and many others, are being tortured and mutilated in a mind/body control concentration camp 24/7. We wouldn’t be tortured and manipulated in a properly functioning legal system/society.” –Galina Kurdina, Activist, Organization of Victims of Psychotronic/Mind Control Weapons.
In her letter to reporter Aileen Donnelly of the National Post, Galina Kurdina pointed out the severity of human rights violations being inflicted on non-consenting subjects:
“Victims of Psychotronic (or Mind/Body Control, Electronic, Directed Energy, Neurological, Non-lethal) weapons detail the most extreme and totalitarian violations of human rights in human history, including the most horrendous incidents of psychological torture, mental anguish and physical mutilations. Criminals may implant people with microchips or nano-materials and place them under continuous surveillance, no matter where they are. They monitor the human brain, including thoughts, reactions, motor commands, auditory events and visual images. They continuously alter consciousness, introduce voices, noises, commands, images, “dreams”, and other disturbances into the brain. They directly abuse, torture and assault our bodies – including performing advanced medical procedures from remote locations.”
Among the effects of remote accessing of human brains she described are: “Manipulation of human behavior: forced speech, involuntary body movements, induced actions, transmission of specific commands into the subconscious, compulsory execution of these commands. Reading thoughts remotely, retrieving memories, implanting personalities. Debilitation of mental acuity.”
Suja Vijayan, Director of World CACH, stressed the importance of publicizing this information on neuro-experimentation in the case of Rohinie Bisesar, because of the great need to expose the true perpetrators behind this and possibly many other crimes committed by mentally-steered mind-control victims:
“This could also be, very well, a precedent where actual culprits who ‘steer people’ to make them commit acts of crime or modify their behavior are punished, instead of the victims themselves. If this is something that her lawyer knows and is complicit in suppressing/silencing, she is in the wrong hands of a lawyer who would work against her instead of for her.”
This writer also recommended that reporters relay the need to take Rohinie Bisesar’s requests seriously:
“Since Rohini Bisesar has claimed she has implants, that she is not mentally ill, but that she needs to be scanned, we believe that she must indeed be scanned thoroughly. The chimera of mental illness has been used dishonorably by the legal system–law enforcement being led and driven by corrupt Intel agencies who are engaging in these programs of electronic harassment and protecting military and Intel programs of neuro-experimentation–to discredit, dismiss, and marginalize the victims of covert experimentation, in order primarily to suppress public awareness.
“All of this is currently being exposed.”
The most powerful and eloquent articulation of malfeasance at the government, media, and judicial system level was voiced by Galina Kurdina:
“We have contacted Parliament, Courts, RCMP (Royal Canadian Mounted Police), police, Security/Intelligence Agencies and other Government institutions over and over and over again, but have had our appeals for assistance and protection almost completely ignored or suppressed. The government “doing nothing” in this situation is a form of sanctioning these horrendous, fascist mind/body control experiments on innocent and defenseless people. That is why the Canadian Government is responsible for these crimes. The government of Canada must uphold the rule of National, International, and Human Rights Law and protect Canadian citizens by these laws. Instead we see the huge disconnect between its professed principles and values and the reality.
“It is our responsibility to record and alert the world to these horrendous crimes and the extreme danger that these technologies pose to democracy, human rights, privacy, mental and physical freedom, and the health of all people. These are the most horrendous weapons and crimes imaginable, and the people, using them, are mass-murdering conspirators, pursuing fascist, totalitarian, fundamentalist schemes.
“Doctor Josef Mengele and other Nazis, who started developing these techniques in concentration camps during World War II, were brought over here from Europe after the war to continue their atrocities. MKULTRA was the first of these illegal and immoral experiments with unwilling victims. We are the latest victims. THIS MUST STOP, and the people responsible exposed and brought to justice for these unspeakable crimes against humanity.”
Controlled Media Versus An Educated Public
Reporters to whom these letters were sent have not yet responded with coverage of neuro-experimentation and mind control. Operation Mockingbird, the CIA’s infiltration of media, may well still be in effect in Canada, as it seems to be in the United States–given US mainstream media reticence in discussing shadowy DOD/CIA mind control projects, which many thousands of Americans are reporting today.
If indeed, as Galina Kurdina’s account and the current media blackouts, bans, and gag orders at Rohinie Bisesar’s trial suggest, there is a deep-seated conspiracy among government, media, and law enforcement in Canada to keep neuro-experimentation and mind control projects secret, how long do they believe this information can be kept from an educated public?
Sooner or later, lawyers, judges, and the entire judicial system, including the Mental Health sectors of the court system, will have to wake up. Psychiatrists worldwide will need to start talking candidly with neuroscientists, and hammer out the truth in every single case of attributed mental instability: mental illness or neuro-experimentation?
As Suja Vijayan notes, Rohinie Bisesar’s case could well be the pivotal case in which an educated public wakes up to the reality of covert neuro-experimentation in our midst and, instead of blaming and convicting the victim, puts “the real culprits,” the immoral experimenters, on trial.
Extensive documentation of scientific interest in neuro-experimentation, as well as coverage of Intelligence agency activities in mind control projects in books, videos, and articles, may be found at this Canadian site and in this pdf.