Whether it’s a self-checkout line, the Face ID on your phone, or a quick ChatGPT request for help with chemistry homework, artificial intelligence (AI) is everywhere, every day. In fact, most Americans would struggle to go a single day without using it.
Artificial intelligence’s widespread use, combined with its addictive pull, makes it a deadly new competitor in society. The temptation of AI is understandable: its vast versatility makes it easy to use for a multitude of purposes.
One of the most renowned forms of AI is ChatGPT, used across the globe to aid in education, work, writing, and more. Another is Apple’s notable assistant Siri, which has aided customers for over a decade. Yet chatbots like these, common as they are, represent just one variety among many impressive types.
AI comes in numerous forms, from navigation and virtual assistance to banking tools and fully functioning robots. It seems as if AI can do anything. Yet none of this would be possible without decades of study and development.
According to the University of Washington, the gates to artificial intelligence were opened by British mathematician and computer scientist Alan Turing. In 1950, Turing published “Computing Machinery and Intelligence,” a paper that posed the question “Can machines think?” and proposed a hypothetical experiment called the Turing Test.
The Turing Test involves two subjects, one human and one machine, along with a third-party evaluator. The evaluator asks both subjects a series of questions and must determine which is the human and which is the machine. If the evaluator cannot reliably tell them apart, the test suggests that a computer can convincingly mimic human conversation.
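To make the setup concrete, here is a minimal sketch of the test’s structure in Python. Everything in it (the function names, the anonymous labels, the idea of passing in reply functions) is a hypothetical illustration of the idea, not Turing’s actual procedure:

```python
import random

def turing_test(evaluator, human_reply, machine_reply, questions):
    """Minimal sketch of the Turing Test's structure (illustrative only)."""
    # Hide the two subjects behind anonymous labels, in random order.
    subjects = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        subjects = {"A": machine_reply, "B": human_reply}

    # The evaluator sees only the questions and the labeled answers.
    transcript = [(q, subjects["A"](q), subjects["B"](q)) for q in questions]
    guess = evaluator(transcript)  # evaluator names the human: "A" or "B"

    truth = "A" if subjects["A"] is human_reply else "B"
    return guess == truth  # False means the machine fooled the evaluator
```

If the evaluator’s guesses are right no more often than a coin flip over many rounds, the machine has, in Turing’s terms, passed.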
Six years later, American computer scientist John McCarthy officially introduced the term “Artificial Intelligence.” McCarthy organized a summer conference at Dartmouth College, gathering a small group of top scientists and mathematicians to explore the idea of machine-simulated intelligence.
This conference would open the field of artificial intelligence and begin a period of inquiry and enlightenment, which is sometimes referred to as AI’s Golden Age.
However, AI research hit a snag in the 1970s, during an era dubbed the “AI Winter.” Reports and papers highlighting artificial intelligence’s flaws led to research funding cuts, which in turn drained interest and study. As a result, AI exploration came screeching to a halt for nearly a decade.
The invention of expert systems, programs that apply stored facts and if-then rules to produce new inferences, led to a brief buzz of AI research. Unfortunately, the buzz was short-lived: inflated expectations and a lack of investment created a second AI Winter, lasting from the late 1980s until the mid-1990s.
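For a sense of what those expert systems did, here is a hypothetical forward-chaining sketch in Python: it repeatedly applies if-then rules to a set of known facts until no new conclusion can be drawn. The rules and facts here are invented purely for illustration:

```python
def forward_chain(facts, rules):
    """Tiny expert-system core: apply if-then rules to known facts
    until no rule can add a new conclusion (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule "fires," adding an inference
                changed = True
    return facts

# A made-up medical knowledge base, for illustration only.
rules = [
    (["has_fever", "has_cough"], "possible_flu"),
    (["possible_flu", "short_of_breath"], "see_doctor"),
]
print(forward_chain(["has_fever", "has_cough", "short_of_breath"], rules))
# The output set includes "possible_flu" and "see_doctor".
```

Commercial systems of the era chained hundreds or thousands of such hand-written rules supplied by human experts.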
Enduring through these winters, artificial intelligence finally took flight in the late 1990s as people began to see its potential. Innovations in complex computer programs and AI capabilities sparked an interest in the field not seen in almost two decades.
Improved algorithms, combined with larger and cheaper data storage, produced cost-effective and useful computer programs capable of making fast, consistent decisions and completing certain kinds of work more efficiently than most humans.
Along with this newfound popularity, funding and investments in artificial intelligence reached an all-time high, further contributing to AI advancements.
Yet as mass support poured into AI, the fate of mankind was quietly sealed for good.
The recent boom in artificial intelligence has spurred worry among experts. Mounting evidence suggests that, despite its undeniable benefits, AI is contributing to the depletion of Earth’s natural resources.
While it may seem strange that a computer program could harm the planet, these programs run inside enormous data centres that devour resources at a staggering rate.
According to the UN Environment Programme, global AI-related infrastructure may soon consume up to six times more water than the entire nation of Denmark. That rapid consumption, on top of already dwindling supplies, threatens to deepen global freshwater scarcity.
Additionally, water isn’t the only resource these AI data centres are draining.
The average ChatGPT request requires nearly ten times as much energy as a Google search, and with over 10 million queries per day, the program uses as much energy as roughly 180,000 U.S. households. Moreover, the International Energy Agency estimates that by 2026, large data centres could consume up to 4% of global electricity, roughly equivalent to the usage of Japan.
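As a back-of-the-envelope check on the per-query math, the sketch below simply multiplies the figures out; the 0.3 Wh per Google search is only a commonly cited ballpark, and every number here is an assumption rather than a measurement:

```python
# Rough per-query energy arithmetic; all figures are assumptions.
GOOGLE_SEARCH_WH = 0.3                # assumed Wh per Google search (ballpark)
CHATGPT_WH = GOOGLE_SEARCH_WH * 10    # "nearly ten times" a search
QUERIES_PER_DAY = 10_000_000          # "over 10 million" queries per day

daily_kwh = CHATGPT_WH * QUERIES_PER_DAY / 1_000  # Wh -> kWh
print(f"~{daily_kwh:,.0f} kWh per day at these assumptions")
```

The takeaway is the multiplication itself: tiny per-query costs, repeated millions of times a day, add up quickly.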
Artificial intelligence also depends on minerals and metals to build and power its hardware. The Yale School of the Environment states that AI hardware uses metals such as cobalt, silicon, and gold, among others. On top of that, mining and processing these metals can create pollution and soil erosion.
Although it only truly took off thirty-some years ago, AI has soared higher and farther than anyone would have thought achievable. Its reach will only expand in the coming years, driven by innovations and models still on the horizon.
While AI can provide countless advantages, its global popularity and the investments behind it leave a knot too twisted to untangle. For good or bad, the day AI became a reality, the door to turning back was unknowingly shut for good.