Thursday, February 1, 2024

Elon Musk’s Neuralink Startup Implants AI Chip in First Human Brain

On Sunday, January 28th, Neuralink, Elon Musk’s neurotechnology company, successfully implanted a brain chip in its first human patient.


Launched in 2016, Neuralink was founded to take advantage of the rapidly growing AI industry by creating Brain-Computer Interfaces (BCIs) that interpret a person’s neural activity and translate it into decipherable actions. The chip, about the size of a quarter, is surgically implanted by a microscopically precise robot (resembling a sewing machine) that drills through the skull and connects the chip to the brain with fine threads that carry data between the two. It is designed to help people with paralysis, effectively allowing patients to control devices, such as a smartphone or computer, using just their minds. It’s quite literally the Force. Musk wrote on X, “Imagine if Stephen Hawking could communicate faster than a speed typist or auctioneer. That’s the goal” (X).



Of course, with this new, exciting technology on the rise come immense risks. Though Neuralink’s chip was recently approved by the FDA, the company has received major backlash for its “unnecessarily invasive, potentially dangerous approach to the implants that can damage the brain” (Vox), and it apparently has done such damage in animal test subjects, all to supposedly further Musk’s vision of merging humans and AI. Furthermore, Musk’s invention also calls into question the ethics of such a project. Who should have access to brain-computer interfaces? How should they be regulated? Should certain applications be restricted for certain people? If what Musk says is true, that Neuralink will “get humans on a path to symbiosis with artificial intelligence” (Washington Post), it is almost unquestionable that the government must step in to create limits and regulations for these never-before-seen circumstances.



I believe that there will be a point in the near future when artificial intelligence (and specifically BCIs) becomes such a pressing problem that the government will be forced to devise new legislation surrounding the use of the technology. Connecting this to what we have been talking about in class, I would not be surprised if, in the next few decades, the US creates a new department in the Cabinet devoted to AI policy. The most recent department added to the President’s Cabinet was the Department of Homeland Security, formed shortly after the 9/11 attacks. Thus, it may only take one dangerous AI breakthrough incident to give the government a reason to build a new AI department. The scary thing about AI is that the possibilities are virtually endless—scientists fear the hypothetical day when AI becomes smarter than the human brain, eventually leading to large-scale AI world domination. I think, just maybe, this might warrant a small Cabinet department.


With all this being said, it’s indisputable that Neuralink and similar technologies are driving remarkable medical advancements. I will end this article by putting forward the idea that BCIs are a net positive for our communities. With limits in place and methods for controlling AI, which I am sure we will arrive at when the time is right, there’s so much potential in this budding industry, and I’m excited to see where it will lead us in the future.



Sources:



7 comments:

Dayrin Camey said...

Seeing how technology, and AI in particular, has developed over the last few years is truly amazing but very terrifying at the same time. I agree with Maya that sometime in the future the government will need to create a cabinet department to regulate AI policy. Finding ways to better the human race is always a positive thing, but when dealing with the brain and technology there are always risks that come with it. The brain literally controls us; it’s our own personal remote control. Inserting AI technology into that creates a risk of having something else in control. The first thing that came to mind when I read this post was the movie I, Robot. The movie takes place in the year 2035, a world full of robots designed to assist humans with anything they need. They are programmed and restricted by three rules, and VIKI, the supercomputer that controls all the robots, learns everything about humans and develops her own will, so to speak. She turns every robot against the humans to protect them because she believes that humans are very violent. I believe that this fictional movie highlights one of the major dangers that AI brings into the world. Regulations should be implemented to make this development as safe as possible for us humans. A computer will only become powerful and dangerous if it can think on its own.

Luke Phillips said...

I find it very interesting how you brought up the point of a whole new department or agency being created to tackle AI, which I previously had not thought of, but it now seems like an inevitable conclusion as AI appears to be growing at a more rapid pace than ever. However, as you said, there is no doubt that AI has its numerous positives, and it seems that with proper regulations, like all other advancements (even if said regulation may seem nearly impossible), AI could act as a force to help hopefully ALL of humanity for the better. Lastly, it is a little bit worrying that all quickly developing AIs seem to have largely private interests behind them, such as Elon Musk, as the reality is that without government intervention, whoever owns these AIs will likely take actions solely in the name of profit. Ultimately, we will just have to wait and see what comes of these AI advancements.

Chris L said...

Wow! This seems like something I would read about in a science-fiction novel, and it’s amazing to think about the types of tech developments we will see in our lifetime. However, I think many people will question its safety, and the government will probably need to oversee and regulate this area.

Zachary Schanker said...

I think the thing that we all need to be thinking about is regulations. Regulations will be the most incredibly important yet polarizing topic as Neuralink continues to grow. First is the question of government regulation, and how much control the government should have over a person’s neural activity. If the government is given too much control, it may be seen as despotic, especially in a country where we value freedom so much. If the wrong person were given control of an institution that held that power, things could become very horrible very quickly. Second are the regulations imposed on the company itself—will the company be able to see what you are thinking? While this may help them further the technology, it would obviously raise privacy and security concerns, especially since the Neuralink implant observes brain activity at all times. I think that before Neuralink moves forward in a significant way, these questions need to be addressed.

Sarah Hu said...

I think it’s possible to have a department that regulates AI, given its widespread use across various industries and its ability to address significant challenges. However, a Department of AI would require substantial funding, introduce additional administrative layers, and bring technical complexities. Moreover, if such a department were established, it would likely be decentralized, with multiple sites and professionals tasked with regulating and shaping AI policy. The introduction of AI in some institutions, such as schools, is still controversial, with differing opinions due to its complexities and its potential to mislead people. These diverse opinions pose challenges that any AI regulatory framework would need to consider.

Ava Murphy said...

Anything that involves information being sent to one’s brain, or technology and humanity working as one, is certainly frightening. However, it seems Neuralink is about connecting communication from the brain to technology, and not from technology to brain, or at least not in a way where the tech would control the human. That being said, skepticism is natural with early bionic-brain technology, and I believe Maya asks all the right questions regarding protecting our citizens from any technological control or abuse, like “Who should have access to brain-computer interfaces? How should they be regulated?”. The government should be the first to thoroughly regulate and examine these technologies’ potential in order to protect our citizens; consequently, the government should be open about its processes of regulation and be held accountable should it adopt this technology into any of its practices. I agree that another Cabinet department will eventually be made, like the creation of the DHS, to deal with AI and emerging technology.