Science

One small step too far into sci-fi?

Artificial intelligence has cropped up in science fiction for decades, and as science advances we are getting ever closer to making machines with human-like intelligence a reality. By mimicking the way the human brain works – learning from examples, recognising objects and making decisions – AI allows machines to perform tasks that could previously only be carried out by humans.

‘Weak’ AI is now part of our everyday lives, through predictive text, speech recognition technology such as Amazon’s Alexa, and ‘smart’ recommendations on platforms like Spotify and Netflix – these systems are built to perform specific tasks, and so mimic only part of the human mind. ‘Strong’ AI would more fully replicate the autonomy of the human brain and could apply its intelligence to any problem, much as a human does. For now this is a purely theoretical concept, existing only in the realm of sci-fi, such as HAL from 2001: A Space Odyssey – a sentient AI computer controlling a spacecraft, which eventually malfunctions with fatal consequences for the crew.

Recently, increased investment in AI research has driven the development of machine learning, in which many computers are used to mimic the layout of the brain’s neural network, allowing a machine to learn and reprogram itself as it digests new data and so carry out tasks with greater accuracy. Deep learning is a subset of machine learning based on deep neural networks with multiple hidden layers of connections, each refining the output of the layer before it, so that the learning can be carried out without human intervention.
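As a rough illustration of what these hidden layers mean in practice, the minimal sketch below trains a tiny feed-forward network with two hidden layers on the XOR problem using plain NumPy. It is not the system behind any product mentioned in this article; the layer sizes, learning rate and iteration count are illustrative assumptions.

```python
# Minimal sketch of a feed-forward network with two hidden layers,
# trained on XOR (a task a single layer cannot solve) using plain NumPy.
# All sizes and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases for two hidden layers and an output layer.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 8))
b2 = np.zeros(8)
W3 = rng.normal(size=(8, 1))
b3 = np.zeros(1)

lr = 0.5
for step in range(5000):
    # Forward pass: each hidden layer transforms the previous layer's output.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Backward pass: gradients of the squared error, propagated layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    # Gradient-descent updates for every layer's weights and biases.
    W3 -= lr * h2.T @ d_out
    b3 -= lr * d_out.sum(axis=0)
    W2 -= lr * h1.T @ d_h2
    b2 -= lr * d_h2.sum(axis=0)
    W1 -= lr * X.T @ d_h1
    b1 -= lr * d_h1.sum(axis=0)

print(out.round(3))  # predictions should approach [0, 1, 1, 0] as the network learns
```

The point of the example is only that the "learning" is nothing more than repeated small adjustments to the connection weights between layers, driven by the data itself rather than by hand-written rules.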

One of the main concerns with the implementation of AI is job automation. AI is a perfect match for repetitive, predictable tasks, so it could easily be rolled out to replace human workers across a multitude of sectors, causing mass displacement of the workforce. In the wrong hands, AI could also be utilised for criminal activity: hacking, automated scam calls, or the use of ‘deepfakes’ to falsely incriminate influential people.

Autonomous weapons are another massive concern – what if an AI decided to launch nuclear weapons? Machines would be an ideal culprit to put behind assassinations, genocidal violence and the destabilisation of nations, as they would shift responsibility away from actual people. Yet another issue stems from inherent human biases, which will inevitably be passed on to the AI from its developers – the majority of whom are able-bodied, white, wealthy men – and which have the potential to deepen socio-economic inequalities.

However, it’s not all bad. Futurist Martin Ford, who focuses on AI, has dubbed it “the most important tool in our toolbox for solving the biggest challenges we face”. The integration of AI into the data-heavy healthcare industry has already proven to have a positive effect: around 90% of healthcare data comes from medical imaging, the vast majority of which goes unanalysed – an ideal task for AI. This was shown in a collaboration between Google’s DeepMind and London’s Moorfields Eye Hospital, where an AI trained to diagnose ocular conditions from digital retinal scans selected the correct referral decision for over 50 eye diseases with 94% accuracy, rivalling that of top medical experts.

An image-based dementia screening test has also been developed by Cognetivity, using AI to help differentiate between the image-processing ability of patients with and without dementia – a notoriously difficult illness to diagnose. AI technology could also prove invaluable in tackling climate change, facilitating better climate predictions, working out where carbon emissions are coming from and forecasting extreme weather.

It is clear there are definite positives to the widespread development of AI, but there are also huge risks, as Stephen Hawking anticipated: “Unless we learn how to prepare for, and avoid the potential risks, AI could be the worst event in the history of our civilization.” Whether that preparation comes through international treaties or regulatory bodies remains to be seen, but it looks like an important precursor to further advances in artificial intelligence.


13/04/2021

About Author

Rosina Poller


