Why Recognizing Artificial Consciousness May Be More Difficult Than Creating It

Introduction

For many, the image of a world built on Artificial Intelligence (AI) brings to mind robots roaming the streets of an apocalyptic, dystopian future in which humans are hunted to near extinction. While such fiction makes for great (or sometimes not-so-great) filmmaking, the idea that AI could evolve beyond its accepted scientific, or even mechanistic, scope is a very real concern within the scientific community.

For example, the famed astrophysicist Stephen Hawking, one of the greatest scientific minds of the modern age, warned that AI could be capable of "taking off on its own and redesigning itself at an ever-increasing rate." In the August 29, 2019 conversation between Elon Musk and Jack Ma, Musk stated that we might expect AI to spontaneously author (or alter) its own programming at some point in the future.

In this article, we will attempt to identify the key characteristics most often associated with consciousness, sentience, and intelligence, to determine whether AI is capable of developing a mind of its own, or, indeed, whether it already has.

What Can We Consider To Be AI?

If we defined AI by its ability to produce useful answers based on ingesting and processing information, then a simple, ...


Read More on Datafloq
