Machine translation of cortical activity to text with encoder–decoder framework (nature.com)
bognition 1492 days ago [-]
Really cool to see progress made here but this won't be available for public use any time soon (likely decades).

One of the biggest challenges with decoding brain signals is getting a large number of sensors that detect voltages from a very localized region of the brain. This study was done with ECoG (electrocorticography), which involves implanting small electrodes directly on the surface of the brain. Nearly all consumer devices use EEG (electroencephalography), which involves putting sensors on the surface of the scalp.

Commercially available ECoG is highly unlikely, as it requires extremely invasive brain surgery. For ethical reasons, the implants in this study were likely placed to help diagnose existing life-threatening medical issues.

Decoding speech from EEG won't work as well as ECoG for a number of reasons. First, the physical distance between the sensors and the brain means the signals you pick up aren't localized. Second, the skin and skull are great low-pass filters and remove the really interesting signals at higher frequencies (roughly 100 Hz to 2 kHz). Additionally, these signals have really low signal power because they're correlated with neuronal spiking.

ECoG does a really good job picking up these signals because the sensor is literally on the surface of the brain. It's really hard to pick up these signals reliably with EEG.
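
A toy illustration of that low-pass point, with entirely made-up numbers (the sampling rate, cutoff, and signal content below are arbitrary, just to show the effect, not anything measured):

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 4000                                  # synthetic sampling rate, Hz
    t = np.arange(0, 2.0, 1.0 / fs)

    # "Cortical" signal: a slow 10 Hz rhythm plus a 500 Hz high-gamma-like component.
    cortical = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 500 * t)

    # Model the skull/scalp as a crude low-pass filter (the 80 Hz cutoff is a guess).
    b, a = butter(4, 80 / (fs / 2), btype="low")
    scalp = filtfilt(b, a, cortical)

    def band_power(x, lo, hi):
        """Total power of x in the [lo, hi] Hz band, via the FFT."""
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        spec = np.abs(np.fft.rfft(x)) ** 2
        return spec[(freqs >= lo) & (freqs <= hi)].sum()

    print("500 Hz band power, cortical:", band_power(cortical, 450, 550))
    print("500 Hz band power, 'scalp': ", band_power(scalp, 450, 550))  # orders of magnitude lower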

hughw 1492 days ago [-]
Seismologist here. The problems you describe sound reminiscent of those in seismic imaging. Ideally you'd bury geophones below the weathered layer (typically 3 m) at the surface, to get ideal coupling. In practice that's not economic at scale, and so you plant them on the surface, and account for the non-linear wave transmission through the weathered layer by clever math and by collecting more samples.

There's been a forty-year evolution in these techniques. The cheap, noisy technique might prevail if scientists keep refining the craft by tiny improvements.

stanfordkid 1492 days ago [-]
I looked into the problems in this field in the past; my impression is that the difficulty lies in how the signal diffuses through the skull and the topmost layers of the brain. It's all about resolution, and this sounds like a very similar problem. The question is always how much non-linearity there is, and what the frequency and density of the sensors are relative to how often you have to sample to characterize it. Very excited to see how the field develops!
kensai 1492 days ago [-]
Neurologist here. There is an inherent problem of physics. The skull acts as a low-pass filter of the brain's activity. You lose a lot of information if you don't go under the skull.

I don't think EEGs can really give the spatial and temporal resolution we really need to extract the necessary information for thought decoding (or encoding).

dnautics 1492 days ago [-]
you don't need to decode thought, you just need to decode information at some really low bitrate to export information (say text).

Working with a NN, the brain can probably negotiate the relevant parameters (let's say frequency modulation across one channel with four bins, at 4 spots on the brain); that's 4^4 = 256 values across whatever timebox of resolution you get. That's enough to encode English letters, which itself is probably an under-provisioned mechanism of data transfer.
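
Back-of-the-envelope, using the hypothetical 4-bin/4-site numbers above:

    import math

    bins_per_site = 4              # hypothetical frequency bins per site
    sites = 4                      # hypothetical recording sites
    symbols = bins_per_site ** sites
    print(symbols, math.log2(symbols))   # 256 distinct states = 8 bits per timebox, plenty for one letter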

smallnamespace 1492 days ago [-]
Give the user real time feedback from the reader NN.

I wouldn’t be surprised if the biofeedback allows the user to retrain their brain to encode information in low frequency signals—the raw channel capacity necessary to transmit text is quite low, plus the reader NN can also use context if hooked up to a recurrent submodel.

youareostriches 1492 days ago [-]
I’m curious of your opinion on Openwater: https://www.openwater.cc/technology
2008guy 1492 days ago [-]
No, actually high density electrode implants are right around the corner. Watch the neuralink press event.
empath75 1492 days ago [-]
> One engineering challenge is that the brain's chemical environment can cause many plastics to gradually deteriorate. Another challenge to chronic electrode implants is the inflammatory reaction to brain implants. Transmission of chemical messengers via neurons is impeded by a barrier-forming glial scar that occurs within weeks after insertion followed by progressive neurodegeneration, attenuating signal sensitivity. Furthermore, the thin electrodes which Neuralink uses, are more likely to break than thicker electrodes, and currently cannot be removed when broken or when rendered useless after glial scar forming.

Yeah, you're not putting that in my brain.

https://en.wikipedia.org/wiki/Neuralink

2008guy 1492 days ago [-]
The inflammation and damage is seen in traditional arrays that are large and rigid. A thread with low enough moment of inertia will likely not cause as much damage. And the damage is only important if you put the electrodes in an important place... we currently screw two giant lag screws into people’s heads and call it “deep brain stimulation” so I feel optimistic about the long game. But you’d still be right to be wary. And if I had it my way, this technology would never be allowed to exist at all...
stainforth 1492 days ago [-]
Can you continue about your opposition to it existing?
salawat 1492 days ago [-]
It's a social crisis waiting to happen, especially if you can end up decoding more than just what someone uses their speech centers to articulate.

This is invasive to the extreme, and seems to open the door for violations of people's intimate thoughts down the road.

You may not think about it much now, but if you pay any attention to things like intrusive thoughts, or even have to deal with carefully maintaining a public face in the workplace, it should not be difficult to realize why these technologies are legitimately dangerous even as read only systems.

The real nightmare begins when you finally get fed up with Read-Only and figure out how to write in order to potentially mutate mental state.

I'm normally pretty forward-thinking in terms of embracing the march of technological progress. However, the last decade or so has shown that we as a society have let our grasp exceed our socio/ethical/moral framework for using it responsibly; and the potential abuse a full read/write neural interface would enable is one of the few things that has managed to attain a "full-stop" in my personal socio-ethical-moral framework.

Not to sound like that adult, but we're just not ready.

Before anyone points out that the same moral outrage probably occurred with the printing press; there is a big damn difference between changing someone's mind through pamphlets, and having a direct link to the limbic system to tickle on a whim. We do a very bad job of correctly estimating the long-term effects of technological advancement; just look at how destructive targeted advertising has been.

I haven't reached my conclusion from an existing preconception/predisposition either. I used to be massively for this particular advancement. Only through a long time spent reflecting on it has my viewpoint done a 180.

I'm aware of all of the positive applications for the handicapped, locked-in, and paralyzed; but I'm still reluctant to consider embracing it for their sake when I've seen how prone our legal system is to taking a crowbar to a minor exception/precedent.

Maybe I've just been in the industry long enough not to trust tech people to keep society's overall well-being and stability at heart. Maybe I'm becoming a Luddite coward as I get older. I don't know, and I ask myself if I'm not being unreasonable every day. The answer hasn't changed in a long while, though, even though I do keep trying to seek out opportunities to challenge it.

I hope that helps, and doesn't make me sound like too much of a nut.

IdiocyInAction 1492 days ago [-]
> Before anyone points out that the same moral outrage probably occurred with the printing press; there is a big damn difference between changing someone's mind through pamphlets, and having a direct link to the limbic system to tickle on a whim.

I recently read a short story by Ted Chiang that likened the development of writing to a fundamental cybernetic enhancement of the brain. I found it quite enlightening, as I had never thought about how writing changes how we see ourselves and the environment. Our memories are imperfect and inaccurate and amplify the biases we have, while writing loses much less information.

> just look at how destructive targeted advertising has been

Can you elaborate? Targeted advertising doesn't even make my top 100 of destructive technologies.

dandelo1953 1492 days ago [-]
Instead of thinking of advertising as "technology" you might want to look into the military-esque research that brought it into the free market. Just like the internet, psyops was first developed and formalized by people who value information over influence, as only one will beget the other with any statistical certainty.
salawat 1491 days ago [-]
Targeted advertising in my lifetime has gone from staging ads for kitchen appliances on daytime TV channels or in magazines for housewives, to a bunch of friends getting together with their cell phones, talking about a one-off odd topic, and then finding ads pop up on that topic in the next week.

To clarify: we've gone from general audience profiling to the employment of broadband sensors for surreptitious collection of data from which to make ad-serving decisions. There are also patents for installing microphones so users can scream a brand name at a TV to skip a commercial, and the practice of frame-sampling viewed content from smart TVs. These intrusions into personal privacy exist purely to forward the interests of the ad servers, and they also create a vulnerability: your digital footprint is available to anyone else interested in paying, or asking, to use it. You can't have that granular ad targeting without implementing further surveillance capabilities.

Furthermore, there are additional consequences in that filter bubbles are created. Without you being aware, the advertising industry will by default attempt to skew your overall experience toward what it thinks you want to see, rather than what is actually out there or what you ask for. These algorithms, allowed to run unchecked and without instilling an inoculative awareness of their tendency to shepherd you right off the reservation given enough time, lead to us throwing around phrases likening our society to being "post-truth", and to multiple recorded instances of widespread, population-level sentiment engineering.

So we"e garbage binned any semblance of common worldview, and invited Orwellian tiers of data collection into our lives so that other people can stand a chance at maybe serving us an ad we weren't even actively looking for in the hopes of modifying our behavvior to make a purchase happen so that they can generate revenue off of our eyeballs and content creation.

Make no mistake. Targeted advertising is a blight. It's one of those things that sounds reasonable, innocent, and possibly even helpful on the surface; but quickly sours once you start digging into the details that make it happen.

I understand some people may feel they get value out of such an arrangement; that having that ad pop up at that time genuinely makes their life easier. I ask the following, however: has an ad ever taught you anything that dedicated research and a purposeful exercise of your will to purchase couldn't teach you? Has your experience searching and trying to share information online not been adversely affected, in that all people's searches of the same terms have no real consistent base anymore? The answer for me in both cases is "no". Throw in the fact that if I don't regularly clean out every last trace of client-side state, my wanderings through cyberspace are painstakingly mapped and integrated by an industry hell-bent on coaxing every last shred of potential value out of my mere existence, with no regard for the dangers of accumulating all that data in one place.

Nowadays, you have rumblings that we should be using these technical solutions as the basis of social/political policy, and half the people making the assertion one way aren't looking at the whole picture.

I don't want the world to time-freeze at early-2000s technology, by a long shot. Let me be clear on that. I do, however, believe we need to seriously take a look at our capabilities and work on creating a cohesive, widespread set of ethical/moral dicta that jibe with what we purport our most valued cultural aspects to be as a society. Yes, I understand that may mean converging on things I don't agree with; and that's fine. I just want as many people as possible to have the whole picture; and I don't think that is actually the case right now.

Also, see the information warfare post from a sibling poster. Information, and tactically imposed voids of information are just as weaponizable as any object. Over longer timescales, no doubt. Still viable though.

2008guy 1492 days ago [-]
The human brain doesn’t change on a human timescale. Where there is variability there is natural selection. Do you think the world will select for friendliness?
Balgair 1492 days ago [-]
> A thread with low enough moment of inertia will likely not cause as much damage.

Shear forces cause glial scarring?

2008guy 1492 days ago [-]
Yes
bognition 1492 days ago [-]
The limiting factor here isn't implant technology. The limiting factor is the rate we can implant electrodes and do new science.

Don't get me wrong, I love Elon as much as the next fanboy but there are serious ethical implications at play here that you can't just engineer your way around.

Ajedi32 1492 days ago [-]
What Neuralink has done so far is pretty impressive, but I think "decades" is still a pretty reasonable estimate for when that technology will be available for use cases that aren't medically necessary.
turingbike 1492 days ago [-]
There was a great article on the front page yesterday, When to Assume Neural Networks Can Solve a Problem, https://news.ycombinator.com/item?id=22717367 . Case 2 is very relevant here, basically: if you have solved the problem with access to lots of data, usually you can adapt to a lower-data regime.

I am way out of my element talking about brain surgery and sensors. However, one thing that I do well is say "you shouldn't bet against neural networks", which is a great way to be right on a few-year time horizon.

lioeters 1492 days ago [-]
I've been fascinated by this topic since a couple decades ago, when I participated in a research study involving EEG and word recognition. Progress seems to be slow but steady, with applications getting more accurate and practical.

Your point about invasive implants being impractical for commercial use made me wonder, so I searched the first phrase that popped into my head, "Non-invasive Brain-Computer Interface". Looks like there's promising research on significantly improving the sensitivity/resolution of EEG signals.

- First Ever Non-invasive Brain-Computer Interface Developed - https://www.technologynetworks.com/informatics/news/first-ev... (2019)

- Noninvasive neuroimaging enhances continuous neural tracking for robotic device control - https://robotics.sciencemag.org/content/4/31/eaaw6844

Still, your prediction of "likely decades" sounds realistic. I'm hoping for an affordable, non-invasive brain-computer interface to be as widely used as the keyboard, mouse, or microphone.

dnautics 1492 days ago [-]
I don't think this is the first ever non-invasive bci. I recall decades ago there was a BCI where you could control between left and right literally by activating the appropriate hemisphere and triggering a relative potentiometer.

Extremely crude, but it worked. IIRC the instructions were "to go left, think 'go left'"; "to go right, relax". After some number of sessions (I think I remember it being "a lot") the user's brain would automatically do the correct thing.

PetitPrince 1492 days ago [-]
> It's really hard to pick up these signals reliably with EEG.

If I may add: EEG is also a pain to set up (you have to apply gel, position the cap correctly, etc.) and it's very easy to pollute your signal by merely moving a bit or even blinking.

king07828 1492 days ago [-]
Has there been any work to use a neural network to generate/simulate ECoG signal output from an EEG signal input? (My Google-fu only gives definitions and distinctions for ECoG and EEG). Almost sounds similar in concept to deep learning super sampling (DLSS), i.e., taking a low resolution image/signal (EEG) and using a neural network to generate/simulate a high resolution image/signal (ECoG).
raverbashing 1492 days ago [-]
The problem sounds similar but it's completely different

DLSS kinda works because we "know" what each thing in a photo is.

EEG to ECoG would be like trying to figure out a painting (which could be anything, by any painter) from a significant distance, behind frosted glass.

king07828 1492 days ago [-]
> DLSS kinda works because we "know" what each thing in a photo is.

I thought the reason DLSS works is because the same rendering algorithm is used to generate the low resolution image and the high resolution image and the neural network merely learns a filter between the two.

Take a patient with ECoG implant(s), put EEG sensors on the patient, and hit record. You now have the same rendering mechanism (the brain) generating a low resolution signal (EEG) and a high resolution signal (ECoG).

However, back to DLSS, if the low resolution signal is a single pixel, then generating a 4k image from just that single pixel may not be very fruitful.

Still, it would be interesting to see an attempt at using a generative adversarial network (GAN) to generate an ECoG from an EEG. And if it doesn't work, then make a determination of how much more EEG sensitivity is needed before it will work.
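
A minimal sketch of what that paired-recording setup could look like, assuming time-aligned EEG/ECoG windows were available. The channel counts, the tiny MLP, and the random data below are placeholders for illustration, not anything from the paper; a GAN or a temporal (conv/recurrent) model would be the more serious choice:

    import torch
    import torch.nn as nn

    # Hypothetical paired data: 10k time windows of 32-channel EEG (low-res input)
    # and 250-channel ECoG (high-res target), recorded simultaneously.
    eeg = torch.randn(10_000, 32)
    ecog = torch.randn(10_000, 250)

    # A small MLP standing in for the "upsampling" network.
    model = nn.Sequential(
        nn.Linear(32, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 250),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(1000):
        idx = torch.randint(0, eeg.shape[0], (64,))   # random mini-batch
        loss = loss_fn(model(eeg[idx]), ecog[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()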

antupis 1492 days ago [-]
Stupid question, but could you do something very simple like an on/off switch with EEG, combined with something similar to what they have done here?
bognition 1492 days ago [-]
Not sure how that would change anything. Think of the brain as a crowd at a soccer stadium: to decode the brain you'd need a mic on each person to capture every conversation. ECoG is like having one mic shared between every 100 people, while EEG is like having a single mic 10,000 feet above the stadium.
posterboy 1492 days ago [-]
This is my new favorite metaphor, if it's an analogy to how people sing, and how a person can sing somewhat randomly (freely) yet in a controlled way. The fact that there are two sides of the crowd chanting against each other, with the stadium announcer blaring over it occasionally, makes it so much better.

Corollary: you need at least two microphones, for simplicity's sake, although the repetitiveness of the chants makes it a little easier.

Most likely this will find good use in aphasia research.

garethsprice 1492 days ago [-]
The Muse headset uses the "on/off" signal for meditation training. It has a fun streaming API you can play with.
gridlockd 1492 days ago [-]
Yes, see EPOC
HPsquared 1492 days ago [-]
What's your opinion on Neuralink?
bognition 1492 days ago [-]
I think they're a cool company and I even thought about applying. However, the limiting factor here isn't the implant technology. Lots of groups have been building really great stuff for years (groups around the world have been recording from neurons in living animals for decades). The issue is there are ethical implications for performing such an invasive brain surgery on a healthy person. Right now the limiting factor is finding suitable candidates.

The ECoG implants are usually done to pinpoint where seizures start in the brain. The surgeons already have a good idea of the rough area from EEG, but to zero in on the exact locus they need ECoG.

So to find a candidate for a study you need someone who has epilepsy, whose epilepsy is bad enough to merit brain surgery, and whose epileptic focus is not the brain region you care about but is close enough to your target region that the same ECoG grid will cover both areas.

So again, I'm all for pushing this science forward. The more we learn about how the brain works, the more we'll understand what makes us human. However, this isn't a technology problem right now; it's an ethical and medical one.

Ajedi32 1492 days ago [-]
> However, the limiting factor here isn't the implant technology. [...] The issue is there are ethical implications for performing such an invasive brain surgery on a healthy person.

That sounds like a limitation of the implant technology to me. The reason there are ethical problems with performing invasive brain surgery on a healthy person is because the risks and downsides of currently available implant technology are significant. If getting a brain implant were as cheap, easy, and safe as getting a tattoo, the ethical problem would be largely solved.

BickNowstrom 1492 days ago [-]
There are quite a few ethical problems. On the reading side, there is interrogation (of suspected spies or terrorists), which is already decades underway with fMRI, and brainjacking (a keylogger for your thoughts). As for deep brain stimulation, this is researched to create fearless soldiers, change sexuality, extremely painful torture, radio control movements, and alter emotional states.

Even for patients who suffer from severe health conditions such as OCD, depression, or tremors, the treatment can be very disruptive and emotional well-being may decrease, even with successful treatment of the condition (the doctor is happy, the patient less so).

The dual use possibilities of this technology are extremely scary, and the involvement of Facebook (supposedly for VR) and DARPA (supposedly to treat anxiety and PTSD in soldiers) does not bode too well.

mattkrause 1492 days ago [-]
Got some references for "decades" of fMRI interrogation (or, really, anything else)?

As far as I know, decoding is challenging with totally cooperative subjects doing simple tasks. Wiggling just a few mm is enough to completely destroy a run.

BickNowstrom 1492 days ago [-]
The brain can't lie (2003)

> Brain scans can reveal how you think and feel, and even how you might behave. No wonder the CIA and big business are interested.

https://www.theguardian.com/science/2003/nov/20/neuroscience...

Technology vs. Torture (2004)

> Indeed, a Pentagon agency is already funding Functional MRI research for such purposes.

https://slate.com/culture/2004/08/how-technology-will-elimin...

The Legality of the Use of Psychiatric Neuroimaging in Intelligence Interrogation (2005)

> For example, an interrogator could present a detainee with pictures of suspected terrorists, or of potential terrorist targets, which would generate certain neural responses if the detainee were familiar with the subjects pictured. U.S. intelligence agencies have been interested in deploying fMRI technology in interrogation for years. It now appears that they can.

https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?arti...

Zero-Shot Learning with Semantic Output Codes (2009)

> As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.

https://www.cs.cmu.edu/afs/cs/project/theo-73/www/papers/zer...

Anecdotal hearsay: I first heard of a brain-reading helmet able to successfully reconstruct a numerical password on subjects it was not trained on (but who consciously had to think about the keycode) in 2001. I also heard this technology was used extensively in Guantanamo Bay and black sites, possibly as a cheap trick to intimidate prisoners into speaking the truth / making them visibly anxious to lie, such tricks dating back to the world wars, when sedatives and/or uppers were disguised as "truth serums", and even the threat of administering these caused subjects to crack.

As for unethical deep brain stimulation research see:

- Assessment of Soviet Electrical Brain Stimulation Research and Applications (1975) https://www.cia.gov/library/readingroom/docs/CIA-RDP96-00792...

- Robert Galbraith Heath (1953+) https://en.wikipedia.org/wiki/Robert_Galbraith_Heath

> Dr Heath's work on mind-control at Tulane was partly funded by the US military and the CIA. Dr Heath's subjects were African Americans. In the words of Heath's collaborator Australian psychiatrist Harry Bailey, this was "because they were everywhere and cheap experimental animals". Following the discovery by Olds and Milner of the "pleasure centres" of the brain [James Olds and Peter Milner, "Positive Reinforcement Produced by Electrical Stimulation of the Septal Area and Other Regions of the Rat Brain," Journal of Comparative and Physiological Psychology 47 (1954): 419-28.], Dr Heath was the main speaker at a seminar conducted by the Army Chemical Corps at its Edgewood Arsenal medical laboratories. Dr Heath's topic was "Some Aspects of Electrical Stimulation and Recording in the Brain of Man." Details of Dr Heath's own involvement in the MK-ULTRA project remain unclear; but Tulane University continues to enjoy close ties with the CIA. Dr Heath also conducted numerous experiments with mescaline, LSD and cannabis.

comex 1492 days ago [-]
None of your references say that anyone has used fMRI for interrogation, let alone for decades. At most, your first two links say that the CIA and DoD respectively are interested in using fMRI for lie detection, but that's not necessarily the same as interrogation (both agencies currently use polygraphs extensively on their own employees, a case where the subject would be cooperative), and they don't say the agencies have actually put that idea into practice. Your third link cites the first one and does suggest that using fMRI for interrogation is possible, but doesn't say whether anyone has done it. Your fourth link does not mention the government or interrogation. The other two links are not about fMRI.

So we're left with the anecdotal hearsay, which might well be true, but even if true, doesn't really show that fMRI was effective against prisoners beyond its use as a psychological trick.

BickNowstrom 1492 days ago [-]
I feel you are splitting hairs here. Lie detection is necessarily a part of interrogation. The references say they can do this, and have been interested in doing it, so why wouldn't they be using it? The nature of intelligence agencies demands a certain degree of secrecy/anonymity and conjecture. One author of the fourth paper, though never directly researching military or government applications, moved from the U.S. to Canada in large part due to the appropriation and funding from the military for his research.

If you allow an argument from authority of the practical use in counter terrorism interrogation, see the works of bioethicist Jonathan Marks https://scholar.google.com/citations?user=MpKuUlkAAAAJ&hl=en who in his 2007 paper cites "Correspondence between a[n anonymous] U.S. counterintelligence liaison officer and Jean Maria Arrigo" (2002-2005) https://en.wikipedia.org/wiki/Jean_Maria_Arrigo :

> Brain scan by MRI/CAT scan with contrast along with EEG tests by doctors now used to screen terrorists like I suggested a long time back. Massive brain electrical activity if key words are spoken during scans. The use of the word SEMTEX provided massive brain disturbance. Process developed by NeuroPsychologists at London’s University College and Mossad. Great results. That way we only apply intensive interrogation techniques to the ones that show reactions to key words given both in English and in their own language.

[Military interrogation takes two forms, Tactical Questioning or Detailed Interviewing. Tactical Questioning is the initial screening of detainees, Detailed Interviewing is the more advanced questioning of subjects.]

Note that I did not even make the stronger claim of decades of applied usage -- which you are making me defend and invalidate my references with -- just that it was decades underway. But the above quote should satisfy even that.

mattkrause 1491 days ago [-]
On purely technical grounds, I'm a little suspicious of that quote.

CT scans don't tell you anything about brain function (they're structural), and the sort of MRIs that do tell you about brain function tend not to use contrast agents. People have used iron oxide to measure changes in cerebral blood volume, but it swamps the BOLD signal that's usually used to read out task-related activity.

On the other hand, I can imagine that you could figure out whether a non-cooperative subject knew "SEMTEX" was actually a word, using an oddball paradigm. Not sure how much that really helps, but...

Also, the source you're quoting actually seems decidedly skeptical about whether any of this works. Here's Marks's paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1005479

BickNowstrom 1490 days ago [-]
Heh, this conversation has moved the goalposts so far that it ended up back at the beginning. A needless detour. You are right to be skeptical about whether any of this decades-underway technology actually works, and about it causing ethical problems. Good day.
comex 1492 days ago [-]
> The references say they can do this, and have been interested in doing it, so why wouldn't they be using it?

I'm not trying to split hairs. It is possible that, despite the interest, the technology isn't at a state where it would actually be useful in practice. However, your new citation is stronger.

BickNowstrom 1491 days ago [-]
The errors and bias in these systems add to the ethical concerns. While it is arguably a good thing to substitute torture with brain image interrogation, there is a risk of putting too much trust in these systems, subjecting innocents to days of leading investigation, just because the computer said there is something to be found there.

Israeli airport security (arguably the best in the world) deploys derivatives of these systems that look at micro-gestures, elevated heart rate, pupil dilation, and temperature changes to see if passengers respond with familiarity to terrorist imagery flashed on a screen as they walk by it. If that already works in practice, imagine the same thing but with the subject strapped to hundreds of sensors.

See also the 2010 research on image reconstruction from brain activity, and extrapolate that 10 years in the future and applied to military interrogation: https://www.youtube.com/watch?v=nsjDnYxJ0bo

serf 1491 days ago [-]
> It is possible that, despite the interest, the technology isn't at a state where it would actually be useful in practice.

Well, given that unreliable interrogation techniques are pretty commonly used historically, maybe it's more of a value thing.

i.e. water-boarding is unreliable and cheap, fMRI is unreliable and expensive.

Ajedi32 1492 days ago [-]
I would consider those to be ethical problems with specific applications of brain-machine interface technology, not with the implantation process itself.
jawns 1492 days ago [-]
> If getting a brain implant were as cheap, easy, and safe as getting a tattoo, the ethical problem would be largely solved.

But how realistic is that?

Your brain is a vital organ. It's encased in a hard skull. There is very little margin for error.

It just doesn't strike me as the sort of procedure that could ever be made as cheap, easy, and safe as getting a tattoo -- at least not in our lifetime.

Ajedi32 1492 days ago [-]
I intentionally picked an extreme example to illustrate my point, which is that most of the ethical concerns with giving people brain implants are a function of the drawbacks of currently available implantation technology, not fundamental issues with the concept itself.

Obviously we won't be getting things quite to the level of cost and safety as tattoos anytime in the near future. Even Elon Musk's goal with Neuralink is somewhat less ambitious; he only wants it to be as safe and convenient as LASIK.

akanet 1492 days ago [-]
Neuralink is even more speculative - they want to embed like a thousand tiny metal filaments deep into your brain.
2008guy 1492 days ago [-]
How do you place electrodes without metal filaments? It’s been validated in monkeys.
mattkrause 1492 days ago [-]
Is that true?

There was a throwaway comment about that at a press conference and lots of rumors (positive and negative), but the white paper only mentions "monkey" once, and that's in a reference to another group's paper.

2008guy 1492 days ago [-]
Elon Musk said on stage that they had put it in a monkey and had the monkey move a cursor with it. I've been following Musk very closely since 2010 and he's never blatantly lied, so I don't think this will be the first time. He has made errors about timelines and such, but never, ever blatantly lied about something so concrete. And it's plausible to boot.
mattkrause 1492 days ago [-]
Not to swing my credentials around, but I actually record neural activity, from monkey brains, for a living.

It can be unexpectedly hard in so many different ways. The dura covering the monkey brain is much tougher, and the brain itself is larger, more convoluted, and moves more (even just from breathing and heartbeats). The animals have busy, clever little fingers, so the interface itself needs to be mechanically robust and durable, because these implants need to last for years.

I certainly want this to be true: with the exception of neuropixels, electrode technology has been depressingly stagnant. On the other hand, I need to see data before I get too excited and if I did have it, I'd be shouting it from the rooftops.

erohead 1492 days ago [-]
> never blatantly lied.

Funding secured

morelisp 1492 days ago [-]
This is merely a misstatement of fact. Lying is what the poors do!
2008guy 1492 days ago [-]
It wasn’t a lie...
bognition 1492 days ago [-]
you need something that conducts electricity. There are groups that use silicon implants.
bognition 1492 days ago [-]
This isn't new at all. Groups have been doing this for decades. Neuralink wants to mass produce the devices.
mattkrause 1492 days ago [-]
People have been recording brain activity for decades, but the scale is very different. Until about 10-15 years ago, most labs were using a handful of electrodes (my PhD work, for example, was done with 1-2 at a time). With cheaper amplifiers, it's become more common to use arrays with ~32-96 electrodes and they're often implanted chronically in the brain.

Neuropixels are fairly new and offer ~300 channels (selectable from ~1000). The neuralink thing would increase this another 10-fold.

duckface 1492 days ago [-]
I doubt this very much.

Sparse signal reconstruction is a massive field and very much a possible thing to do, IIRC using various forms of the FFT.

I think this has already been done, and consumer devices will probably use sparsity to reconstruct cortical signals with sufficient detail for this.
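
For reference, the textbook version of that idea is compressed sensing: a signal that is sparse in some transform basis (the DCT here) can be recovered from far fewer samples than Nyquist would suggest by using an L1 penalty. This is a generic sketch, nothing EEG-specific, and the regularization strength would need tuning on real data:

    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import Lasso

    n = 512
    rng = np.random.default_rng(1)

    # Ground-truth signal: sparse in the DCT domain (only 3 active frequencies).
    coeffs = np.zeros(n)
    coeffs[[12, 40, 97]] = [5.0, 3.0, 2.0]
    Psi = idct(np.eye(n), axis=0, norm="ortho")      # inverse-DCT basis as a matrix
    x = Psi @ coeffs

    # Observe only ~15% of the time samples, at random positions.
    keep = np.sort(rng.choice(n, size=n // 7, replace=False))
    y = x[keep]

    # Recover the sparse coefficients from the undersampled measurements (L1 penalty).
    lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(Psi[keep, :], y)
    x_hat = Psi @ lasso.coef_
    print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))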

lars 1492 days ago [-]
This is cool. For those who are not super familiar with language processing, I think it's good to point out the limitations of what's been done here though. They mention that professional speech transcription has word error rate around 5%, and that their method gets a WER of 3%. Sure, but the big distinction is that speech transcription must operate on an infinite number of sentences, even sentences that have never been said before. This method only has to distinguish between 30-50 sentences, and the same sentences must exist at least twice in the training set and once in the test set. Decoding word-by-word is really a roundabout way of doing a 50-way classification here.

It's an invasive technique, so they need electrodes on a human cortex. This means data collection is costly, so they're operating in a very low-data regime compared to most other seq2seq applications. It seems theoretically possible that this could operate at Google Translate-level accuracy if the sentence dataset were terabyte-sized rather than kilobyte-sized. That dataset size seems very unlikely to be collected any time soon, so we'll need massive leaps in data efficiency in machine learning for something like this to reach that level. They explore transfer learning for this, which is nice to see. Subject-independent modelling is almost certainly a requirement for achieving significant leaps in accuracy with methods like this.

kasmura 1492 days ago [-]
Is the following quote at odds with what you are saying about 50-way classification?

"On the other hand, the network is not merely classifying sentences, since performance is improved by augmenting the training set even with sentences not contained in the testing set (Fig. 3a,b). This result is critical: it implies that the network has learned to identify words, not just sentences, from ECoG data, and therefore that generalization to decoding of novel sentences is possible."

lars 1492 days ago [-]
The difficulty of the problem is that of a 50-way classification. If the only goal was to minimize WER, a simple post-processing step choosing the nearest sentence in the training set could easily bring the WER down further. They've chosen to do it the way they did it presumably to show that it can be done that way, and I don't fault them for it.

They claim that word-by-word decoding implies the network has learned to identify words. This may well be true, but it isn't possible to claim that from their result. For example, say you average all electrode samples over the relevant timespan, transform that representation with a feed-forward neural net, and feed it into an RNN decoder. It would still predict word-by-word, on a representation that necessarily does not distinguish between words (because the time dimension has been averaged over). Such a model can still output words in the right order, just from the statistics of the training sentences being baked into the decoder RNN.
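
For concreteness, the nearest-sentence post-processing step I mentioned is just a lookup under word-level edit distance; the training sentences below are generic placeholders, not the ones used in the paper:

    def word_edit_distance(a, b):
        """Levenshtein distance between two word sequences."""
        a, b = a.split(), b.split()
        d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                              d[i][j - 1] + 1,                           # insertion
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        return d[len(a)][len(b)]

    training_sentences = [
        "the birch canoe slid on the smooth planks",
        "glue the sheet to the dark blue background",
    ]
    decoded = "the birch canoe slid on the smooth blanks"   # raw decoder output, one word wrong

    # Snap the decoded output to the closest sentence seen in training.
    best = min(training_sentences, key=lambda s: word_edit_distance(decoded, s))
    print(best)   # the measured WER on this trial drops to zero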

hrgiger 1492 days ago [-]
I tried something similar 5 years ago using the meditation device from choosemuse.com. It was the cheapest option and provided a hackable interface that gave you access to all the data. Then I wrote a small mobile app that connects to the headset.

The application picked and showed a single random word from "hello world my name is hrgiger", then showed a green light. When I saw the green light, I thought about the word and blinked; the headset was able to detect blinks as well, so the app created training data using the window from blink time minus xxx millis. I created a few thousand training examples across the 6 classes this way, trained them with my half-assed NN implementation, and used the generated weights to predict the same way on the phone. I never achieved higher than 40%, and I tried everything: mixed waves, raw data, different windows of the time series. Still, it was a fun project to mess with, and I still try to tune that NN implementation. If they achieve a practical solution I would use subtitles for full-length training; a simple Netflix browser plugin might do the trick, but I'm not sure a single AI algo would understand everyone's different data.
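
For anyone wanting to try something like this, the pipeline boils down to: cut a fixed window ending at each blink marker, turn it into features, and train a small classifier. A rough sketch with purely synthetic data; the window length, the band-power features, and the sklearn model are my own guesses, not what the original app used:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    fs, window_s, n_channels = 256, 2.0, 4
    words = ["hello", "world", "my", "name", "is", "hrgiger"]
    rng = np.random.default_rng(0)

    def band_powers(window):
        """Crude per-channel delta/theta/alpha/beta band powers as features."""
        freqs = np.fft.rfftfreq(window.shape[1], 1.0 / fs)
        spec = np.abs(np.fft.rfft(window, axis=1)) ** 2
        bands = [(1, 4), (4, 8), (8, 13), (13, 30)]
        return np.concatenate([spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                               for lo, hi in bands])

    # Synthetic stand-in for a few thousand blink-aligned EEG windows.
    n_trials = 3000
    X = np.stack([band_powers(rng.normal(size=(n_channels, int(fs * window_s))))
                  for _ in range(n_trials)])
    y = rng.integers(0, len(words), n_trials)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))   # ~1/6 here, since this data is pure noise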

linschn 1492 days ago [-]
40% over 6 classes is way above a random baseline. This is actually pretty cool. Congratulations!
hrgiger 1492 days ago [-]
Oh thank you! I wasn't even aware those numbers were promising.
andai 1492 days ago [-]
The Muse headband looks like it covers a pretty small area of the head right? Other products in a similar price range cover more or less the whole scalp.
hrgiger 1492 days ago [-]
Correct. If I recall correctly it has 8 sensors, 4 on the forehead and 4 by the ears. Last year I wanted to try again, but it seemed like they had dropped Linux support and asked you to contact them via mail to get the SDK, so I didn't bother. If I were starting now I would go with the OpenBCI project, I guess.
carapace 1492 days ago [-]
I've got a Muse on my desk right now; it works-for-me on Ubuntu 16.04 with a BLE dongle and the BGAPI backend of pygatt.

FWIW, the main advantage of the muse design is that by going across the forehead it avoids interference from your hair, if I understand correctly.

The NeuroTechX ppl are working to get some demo notebooks working with muse. https://github.com/NeuroTechX/eeg-notebooks https://neurotechx.com/

hrgiger 1492 days ago [-]
Thanks for the info; considering the situation, maybe I can give it a shot again :) Yes, I also remember reading that the sensors had pretty much identical results to the Emotiv. This project also looks pretty cool, thanks for sharing.
carapace 1492 days ago [-]
Cheers!
leggomylibro 1492 days ago [-]
It looks cool, but they trained their models on people reading printed sentences out loud.

Would that actually translate to decoding the process of turning abstract thoughts into words?

The researchers also note that their models are vulnerable to over-fitting because of the paucity of training data, and they only used a 250-word vocabulary. Neuralink also has a strong commercial incentive to inflate the results, so I'm not too sure about this.

It's great to see progress in these areas, but it seems that technologies like eye-tracking and P300 spellers are probably going to be more reliable and less invasive for quite some time.
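
For comparison, the core of a P300 speller is plain epoch averaging: time-lock EEG windows to each stimulus, average target and non-target trials separately, and look for the extra positivity around 300 ms on targets. A toy sketch with entirely synthetic numbers (the injected bump stands in for a real P300):

    import numpy as np

    fs = 256                                   # sampling rate, Hz
    n_trials, epoch_len = 200, fs              # one-second epochs
    rng = np.random.default_rng(0)

    is_target = rng.random(n_trials) < 0.2     # oddball design: ~20% target stimuli
    epochs = rng.normal(0.0, 1.0, (n_trials, epoch_len))

    # Inject a fake P300: a small positive bump ~300 ms after target stimuli.
    t = np.arange(epoch_len) / fs
    p300 = 1.5 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    epochs[is_target] += p300

    target_erp = epochs[is_target].mean(axis=0)
    nontarget_erp = epochs[~is_target].mean(axis=0)

    window = (t > 0.25) & (t < 0.4)
    print("mean amplitude 250-400 ms, targets:    ", target_erp[window].mean())
    print("mean amplitude 250-400 ms, non-targets:", nontarget_erp[window].mean())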

hyyggnj 1492 days ago [-]
The speaking aloud is very suspicious. Why do subjects need to speak aloud? Are they actually decoding neural signals or just picking up artifacts introduced by the physical act of speaking (i.e. electrodes vibrating due to sound, etc)?
weinzierl 1492 days ago [-]
Fascinating work but far from what some might hope from reading only the title.

The translation is restricted to a vocabulary of 30 to 50 unique sentences.

zo1 1492 days ago [-]
They do mention that the network is partially learning the words themselves:

> "On the other hand, the network is not merely classifying sentences, since performance is improved by augmenting the training set even with sentences not contained in the testing set (Fig. 3a,b). This result is critical: it implies that the network has learned to identify words, not just sentences, from ECoG data, and therefore that generalization to decoding of novel sentences is possible."

warnhardcode 1492 days ago [-]
More than enough to control window focus on my computer and such. I'd be happy to have a system that responded to a few hundred thoughts: "Left desktop", "Right desktop", "Last focused window", "Lock screen", "What time is it?"
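
The computer side of that is just a lookup table from decoded phrases to actions; a sketch assuming a tool like wmctrl is installed for the window-manager bits (swap in whatever your desktop provides):

    import subprocess

    # Map decoded "thoughts" to shell commands.
    COMMANDS = {
        "left desktop":  ["wmctrl", "-s", "0"],
        "right desktop": ["wmctrl", "-s", "1"],
        "lock screen":   ["loginctl", "lock-session"],
        "what time is it?": ["date"],
    }

    def handle(decoded_phrase):
        action = COMMANDS.get(decoded_phrase.lower().strip())
        if action:
            subprocess.run(action)

    handle("Left desktop")   # switches to the first virtual desktop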
zo1 1492 days ago [-]
Can we remove the tracking query-string from this link, please? It works fine without it:

https://www.nature.com/articles/s41593-020-0608-8.epdf

Edit. Sorry, seems to only show the first page if you remove the token.

IHLayman 1492 days ago [-]
Not only that, if you don't run the trackjs script, the pdf won't load at all. Sorry but a hard pass from me. Don't track my reading.
imglorp 1492 days ago [-]
Good idea.

I wonder if the site could run a URL cleaner on all link submissions?

@dang?

briga 1492 days ago [-]
It seems like this field is at about the same stage of progress as image recognition was in the 90s, when researchers were trying to get a handle on MNIST-type tasks.

I wonder how much the language embeddings learned by the transformer are reflected in the actual physical structure of the brain? Could it be that the transformer is making the same sort of representations as those in the brain, or is it learning entirely new representations? My guess is that it's doing something quite different from what the brain is doing, although I wouldn't rule out some sort of convergence. Either way, this is a fascinating branch of research both for AI and the cognitive sciences.

h3ctic 1492 days ago [-]
Looks like a good approach and the error rate of 3% is really good, I guess. Did they mention how they got the input data? I couldn't find it.
zo1 1492 days ago [-]
They use 250 ECG electrodes as input. I think that means it's above the skin, so not invasive.
resiros 1492 days ago [-]
You are mistaken. They use ECoG which is intracranial electroencephalography. These are electrodes placed on exposed surface of the brain. https://en.wikipedia.org/wiki/Electrocorticography
zo1 1492 days ago [-]
Thanks for the info!
raidicy 1492 days ago [-]
I dearly hope this is the case. Even if it were expensive to rig up, being able to not use my hands for computer work would really help (I have RSI).
carapace 1492 days ago [-]
I'm pretty sure you can get that with a HD camera or two and some hypnosis plus off-the-shelf ML.

One of the very first things I learned when I was studying hypnosis was to induce a simple binary signal from the unconscious. (Technically it's trinary: {y,n,mu} https://en.wikipedia.org/wiki/Mu_(negative) )

(In my case my right arm would twitch for "yes", left for "no", no twitch for "mu" (I don't want to go on a long tangent about all the various shades of meaning there, suffice it to say it's a form of "does not compute."))

Anyway, it would be trivial to set up one or more binary signals, and detect them via switches or, these days, HD cameras and ML. You could train your computer to "read" your mind from very small muscular contractions/relaxations of your face. (The primary output channel, even before voice, of the brain, eh?)

Or you could just set up a nine-bit parallel port (1 byte + clock) and hypnotize yourself to emit ASCII or UTF-8 directly. That would be much, much simpler because it's so much easier and faster to write mind software than computer software (once you know how). And you could plug yourself into any USB port and come up as a HID (mouse & keyboard).
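
The decoding side of that nine-bit port is simple: latch the eight data lines on each rising edge of the clock line and emit a character. A sketch, where the sample stream would come from whatever hypothetical switch/camera/ML detector supplies the nine bits:

    def decode_parallel(samples):
        """Assemble characters from (clock, d7..d0) samples.

        `samples` is an iterable of 9-tuples of 0/1; a byte is latched on each
        rising edge of the clock bit.
        """
        prev_clock, out = 0, []
        for clock, *data in samples:
            if clock and not prev_clock:          # rising edge: latch the data lines
                byte = 0
                for bit in data:                  # d7 first, d0 last
                    byte = (byte << 1) | bit
                out.append(chr(byte))
            prev_clock = clock
        return "".join(out)

    # Example: the letter 'H' (0x48) then 'i' (0x69), one clock pulse each.
    samples = [
        (0, 0, 0, 0, 0, 0, 0, 0, 0),
        (1, 0, 1, 0, 0, 1, 0, 0, 0),   # clock high, data = 0b01001000 = 'H'
        (0, 0, 0, 0, 0, 0, 0, 0, 0),
        (1, 0, 1, 1, 0, 1, 0, 0, 1),   # clock high, data = 0b01101001 = 'i'
    ]
    print(decode_parallel(samples))    # -> "Hi"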

I'll say it again: when you connect a brain to a computer the more sophisticated information processing unit is the point of greatest leverage. Trying to get the computer to do the work is like attaching horses to the front of your truck to tow it. Put the horses in the back and let the engine tow them.

astrea 1492 days ago [-]
Could you elaborate a little bit on the 'induce a simple binary signal from the unconscious'? That sounds fascinating.
carapace 1492 days ago [-]
Sure. (Thanks for asking.) It was one of the first things I learned when I started studying (self-)hypnosis. It's a simple way to get access to the unconscious without going into a trance.

There's really nothing to it. You induce a light trance and ask the unconscious mind to create a simple unambiguous yes-no signal. Finger motions are common. After that you can ask yourself questions and get y/n answers (or non-response, what I'm calling "mu", which indicates some issue with the phrasing or nature of the query.)

I should mention that you should be very careful about your self-model if you are experimenting with piercing the barrier between the conscious and unconscious minds. In computer terms, this signal corresponds to a kind of trans-mechanical oracle and having it available to your (metaphorical) Turing machine mind makes you into a fundamentally different kind of processor, operating by rules that may be unfamiliar.

https://en.wikipedia.org/wiki/Oracle_machine

But see also: https://en.wikipedia.org/wiki/Oracle because that's more accurate.

Ritsuko_akagi 1492 days ago [-]
Speech From Brain Signals https://youtu.be/YHFx6O5x5Hw (2019)
tighter_wires 1492 days ago [-]
Man, what is going on with Participant C's active neurons.