Geoffrey Hinton, a noted computer scientist and cognitive psychologist, has announced his resignation from Google amid concerns over artificial intelligence (AI). Mr. Hinton is a pioneer of neural network research, work that has earned him the nickname "Godfather of AI" in the tech community. While announcing his resignation, he said that he is now 75 and it is time for him to retire from the industry. He did, however, express regret over parts of his work in the AI sector.
In his statement to reporters, Mr. Hinton said that AI chatbots are "quite scary." He added that while these chatbots are not yet more intelligent than humans, they may be in the near future. He also noted that his resignation does not end his involvement in the sector: he intends to educate the public and warn them about the dangers of unregulated AI and its possible misuse by bad actors.
It is pertinent to mention that his pioneering research in neural networks and deep learning paved the way for some of the complex and still-evolving AI models we see today, including ChatGPT. Neural networks play a critical role in AI because they are loosely modeled on the way the brain's interconnected neurons perceive and process information. Simply put, a neural network is arranged in layers: the first layer takes the raw input, combines its variables into weighted sums, applies an activation function, and passes the result to the next layer. Each subsequent layer treats the previous layer's output as its input and performs further computation, until the final layer produces the network's output, though some architectures deviate from this strictly layer-by-layer flow.
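To make the layered picture above concrete, here is a minimal sketch of such a feed-forward pass in Python with NumPy. The layer sizes, random weights, and sigmoid activation are illustrative assumptions chosen for this example, not details taken from Hinton's research or from any particular model.

```python
import numpy as np

def sigmoid(x):
    # Nonlinear activation applied element-wise at each layer.
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, layers):
    """Pass an input vector through a stack of (weights, bias) layers.

    Each layer computes a weighted sum of the previous layer's output,
    applies the activation, and hands the result to the next layer.
    """
    out = x
    for weights, bias in layers:
        out = sigmoid(weights @ out + bias)
    return out

# Illustrative 3-2-1 network with random weights (sizes are arbitrary).
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((2, 3)), rng.standard_normal(2)),  # hidden layer
    (rng.standard_normal((1, 2)), rng.standard_normal(1)),  # output layer
]
print(forward(np.array([0.5, -0.1, 0.3]), layers))
```

Production models follow this same basic pattern of layered computation, but with vastly more parameters that are learned from data rather than drawn at random.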
According to Mr. Hinton, AI models will soon surpass human intelligence, as GPT-4 already suggests by eclipsing any one person in sheer breadth of knowledge. Though the model does not yet reason as well as a person, its trajectory suggests it soon could, and in his view we should be worried when that comes to pass.
In his statement, he noted that bad actors could decide to give these models the ability to create their own sub-goals. Amid such unregulated delegation, a model might set itself a sub-goal like "I need more power." What happens after that is utterly unpredictable, but one thing is certain: if the technology is applied to autonomous weapon systems, a disaster looms on the horizon, and no one, good or bad, would be safe.
Mr. Hinton is not the only one who has sounded the alarm about this sector. Some giants of the tech arena have also criticized the rate at which AI is advancing. One of them is tech billionaire Elon Musk, who, along with other players in AI, signed an open letter calling for a pause in the advancement of AI models. Mr. Musk has also joined the AI race with his own project, TruthGPT, which he says will emphasize transparency and best practices.
Mr. Hinton has indicated that the technology could deliver more benefits than risks, so its development can continue, but only in a highly regulated environment where governments enforce policies on how scientists build it. International competition, however, will make halting development a difficult task. It does not take a genius to see that if the US halts its development, China could get ahead, and frankly these superpowers are in a race to assert dominance. It could be a re-ignited Cold War, except that instead of playing out between two major powers, it would spill over into many nations.
While making his exit from Google, Hinton said that he is not trying to tarnish the tech giant's reputation; rather, he wants good things for the firm. Google's Chief Scientist, Jeff Dean, said in a statement that the company remains committed to a responsible approach to AI.
Author’s Sentiments
It is evident that governments are turning a blind eye to these warnings, as none has taken concrete action. With each passing day, the AI sector is flooded by new entrants looking to join the race and make huge profits along the way. Because most developers are focused on profit, safety concerns get ignored, and under the guise of solving humanity's problems with AI, these lapses slide under the radar unchecked, a mistake that could spell disaster in the future. That the pace of advancement has scared the very developers behind these models is no small matter.
For my part, I believe that strict adherence to a globally accepted standard code of practice could address the safety concerns. The problem is that no such standard exists, and trying to create one in the current political environment could replicate what has happened to the so-called nuclear deals and agreements that have fallen apart over time. Though AI can be used to increase human efficiency, it should not replace humanity: there can be only a single copy of a person but multiple copies of an AI model. As the world navigates this quagmire, my take is that we should either adapt or …