Artificial Intelligence in the Twenty-first Century: The Good, The Bad, and The Nuanced Need for Regulation
Introduction
In our increasingly digital world, Artificial Intelligence (AI) remains a focal topic of the twenty-first century. The global focus on AI, notably following its surge in popularity during the 2020s due to Large Language Models (LLMs) such as OpenAI’s ChatGPT, has driven a significant increase in both academic discourse and public interest. According to Stanford University’s 2023 Artificial Intelligence Index Report, the number of AI publications globally has “more than doubled, growing from 200,000 in 2010 to almost 500,000 in 2021” (Maslej et al. 24). Public awareness of AI is also escalating rapidly, with a 2023 McKinsey survey of over 1,500 respondents revealing that “[s]eventy-nine percent … had at least some exposure to gen AI” (Chui et al. 2).
High-profile organisations have also begun capitalising on the technology’s capabilities. For instance, Meta recently implemented its Meta AI assistant in the newest Instagram update, marketing it as a replacement for conventional search engines. In the healthcare sector, companies such as the Chicago-based firm QuantX are leveraging AI to improve the accuracy of breast lesion diagnosis. These statistics and real-life implementations illustrate the high expectations for AI to improve our lives. However, scepticism still lingers regarding its ethical implications, including but not limited to data security, algorithmic bias, and job displacement. The rapid pace of AI development poses a serious challenge for the many countries still struggling to establish regulatory systems that adequately address these issues. Yet it is crucial to formulate some preliminary boundaries now, while AI is still in its infancy. With this in mind, this paper provides an overview of the advantages and disadvantages associated with AI, along with recommended steps for future AI regulation to ensure its deployment aligns with ethical standards.
The 1950s and Onwards: A Brief Timeline of AI
Before delving into these aspects, it is important to define what AI is and explore its timeline. In simple terms, AI refers to computer systems able to execute tasks that normally require human intelligence, such as visual perception, decision-making, and language translation. Contrary to the popular belief that AI is a recent creation, its origins can primarily be traced back to the 1950s and two key events. First, English mathematician Alan Turing introduced the now-famous “Turing test” for computer intelligence in his 1950 paper “Computing Machinery and Intelligence”. The test sought a practical answer to the question of whether computers could think, proposing that a machine could be said to “think” if it fooled people into believing it was human. Though there is debate as to whether any computer has passed the test, Turing’s contribution was invaluable: he turned the abstract question of machine intelligence into one that could be tested. The second event was a workshop organised by Professor John McCarthy at Dartmouth College in 1956. The meeting spanned several weeks and facilitated in-depth discussions among leading American and British computer scientists, with the aim of achieving a breakthrough in human-level machine intelligence. Most notably, this workshop is often cited as the first time the term “Artificial Intelligence” was used to describe this sector of computer engineering.
The AI field subsequently experienced a “winter” lull from the 1970s to the 1980s due to factors such as limited computer processing power and funding cuts for research projects. However, it entered a “summer” of rampant innovation in the 2010s and arguably remains in one today. This renaissance can be attributed to three primary factors: the development of more sophisticated algorithms, the rise of affordable graphics accelerator chips, and the increased accessibility of huge annotated databases (Collins et al. 2). Since then, the world has seen incredible AI developments that are improving preexisting systems and devices in our lives. One example is the advent of deep learning, a subdivision of machine learning in which artificial neural networks, modelled on the brain’s networked neurons, form the AI’s algorithm and enable it to learn from vast amounts of data. Many services and apps today are powered by deep learning algorithms, from virtual assistants, chatbots and recommender systems to image-colourisation tools and bank fraud detection. Moving forward to the AI boom of the 2020s, the majority of mainstream media coverage of AI focuses on LLMs and Natural Language Processing (NLP) technologies such as ChatGPT, along with Generative AI (GAI) applications like Midjourney. These tools have garnered public excitement due to their groundbreaking capabilities to engage in human-like conversation and generate stunning artwork, respectively. However, both have come under scrutiny for disrupting conventional understandings of intellectual property and copyright law, as well as potentially devaluing certain skills in the job market. Despite these concerns, it is anticipated that corporations will increasingly utilise AI across key sectors such as law enforcement, science, and construction by the 2030s (Marr).
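To make the idea of a deep learning algorithm concrete, the following minimal sketch trains a tiny feedforward neural network to learn the XOR function, a classic task that a single layer of artificial “neurons” cannot solve. It is a toy illustration only; the layer sizes, learning rate, and epoch count are arbitrary assumptions chosen for demonstration, not any production system’s configuration.

```python
# A minimal feedforward neural network learning XOR with plain numpy.
# All hyperparameters below are illustrative choices, not tuned values.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which requires at least one hidden layer to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights: 2 inputs -> 4 hidden "neurons" -> 1 output.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

lr = 1.0
for epoch in range(5000):
    # Forward pass: each layer applies its weights, then a non-linearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error back through the
    # layers (gradients of squared error via the chain rule) and update.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # approaches [0, 1, 1, 0] as the network learns XOR
```

Modern deep learning systems apply this same train-by-error-correction principle at vastly larger scale, with billions of weights rather than the handful here; that scale is precisely what drives the hardware and energy demands discussed later in this paper.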
AI at its Best and Worst: Influence on Education and the Environment
AI’s merits and drawbacks have already been thoroughly explored by academics and everyday users alike. As such, this section highlights two aspects that are less widely covered in mainstream media – AI’s positive influence on the education sector, and its impact on our climate. By spotlighting these two topics, readers will gain a more comprehensive understanding of AI’s multifaceted role in shaping our collective future.
In educational institutions, teachers often struggle to dedicate sufficient time to each student due to large class sizes, limited resources and other constraints. Students in need of assistance consequently suffer the most, with the lack of personalised support potentially leading to a cycle of reduced motivation. Socioeconomic factors also play a key part, as those from affluent backgrounds are more likely to be able to access external services such as tutoring to supplement their learning. Those with fewer resources are thus disproportionately impacted by the limitations of the conventional education system, ultimately perpetuating social inequality.

However, the rise of EdTech startups utilising AI to create innovative products has the potential to address these disparities and elevate the educational experience for all. Personalised learning has long been a central focus of the industry, with numerous companies such as DreamBox Learning, Squirrel AI and LiteracyPlanet all creating AI-powered platforms catered to individual student needs using predictive analytics. For example, DreamBox Learning’s predictive analytics system delivers guided reading practice that improves fluency, comprehension and vocabulary knowledge for children from pre-kindergarten to grade 12. LiteracyPlanet likewise delivers personalised learning paths to improve English skills, but with an emphasis on gamification through immersive storytelling to encourage motivation. AI is also progressively being used to create tools that support students with disabilities. For example, Microsoft’s Immersive Reader utilises text-to-speech and grammar highlighting to aid students with dyslexia and visual impairments in improving their reading comprehension. Similarly, Google’s Live Transcribe and Sound Amplifier applications are tailored to the needs of deaf and hard-of-hearing students, providing real-time transcription of classroom discussions and improving audio clarity by filtering background noise.
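None of these vendors publish their exact algorithms, so the sketch below illustrates only the general principle behind such predictive, personalised learning paths: continuously estimate a student’s mastery of a skill from their answer history, then choose the next activity accordingly. It uses Bayesian Knowledge Tracing, a standard model from education research; the parameter values, thresholds and function names are illustrative assumptions, not DreamBox’s or LiteracyPlanet’s actual method.

```python
# A toy "personalised learning path" using Bayesian Knowledge Tracing
# (BKT). Parameter values are illustrative assumptions, not any
# vendor's real configuration.

def bkt_update(p_mastery, correct,
               p_learn=0.15, p_slip=0.10, p_guess=0.20):
    """Update the estimated probability that a student has mastered a
    skill, given one observed answer (correct or not)."""
    if correct:
        likelihood = p_mastery * (1 - p_slip)
        evidence = likelihood + (1 - p_mastery) * p_guess
    else:
        likelihood = p_mastery * p_slip
        evidence = likelihood + (1 - p_mastery) * (1 - p_guess)
    posterior = likelihood / evidence
    # Account for the chance the student learned the skill this attempt.
    return posterior + (1 - posterior) * p_learn

def next_exercise(p_mastery):
    """Pick the next activity from the current mastery estimate."""
    if p_mastery < 0.4:
        return "guided practice with hints"
    elif p_mastery < 0.8:
        return "independent practice"
    return "advance to the next skill"

# Example: a student answers wrong, wrong, right, right on one skill.
p = 0.3  # prior belief in mastery
for answer in [False, False, True, True]:
    p = bkt_update(p, answer)
    print(f"mastery estimate {p:.2f} -> {next_exercise(p)}")
```

Commercial platforms layer far richer signals, such as response times and hint usage, onto this basic adaptive loop, but the underlying estimate-then-adapt cycle is the same.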
Students themselves have also shown support for the integration of AI into their learning experiences, even when institutions are less receptive. In January 2023, a group of disabled Australian students voiced their discontent over the decision of prominent Australian universities to take precautionary measures against ChatGPT due to fears of cheating. These students argued that doing so would hinder their capacity to “read course materials and take exams,” potentially setting them back in comparison to their nondisabled peers (Starcevic). Despite the varying stances that educators and students hold on AI implementation, such cases highlight AI’s potential to make learning more immersive and inclusive. This necessitates collaborative efforts between institutions and students to integrate AI thoughtfully into educational settings, ensuring it enriches the learning experience without sacrificing academic integrity. Overall, as the EdTech industry continues to grow in global market size, there is a remarkable opportunity to expand the reach and positive impact of AI solutions in education.
While the potential benefits of AI in education are evident, it is crucial to recognise a major drawback: its environmental impact. Unlike more immediate concerns such as the disruption of conventional copyright laws, AI’s environmental cost is less apparent because it is not something we can literally “see”. Most users will never witness the extensive hardware needed to run AI tools, because it is housed in data centres around the world. To maintain these systems, technology giants rely on the constant harvesting of raw minerals, often through cheap labour in the Global South. In Atlas of AI, researcher Kate Crawford aptly critiques the “myth of clean tech” and outlines the environmental costs of AI development. For instance, she startlingly reveals that running one NLP model generates “more than 660,000 pounds of carbon dioxide,” equal to “125 round-flight trips from New York to Beijing” (Crawford 42). This is especially concerning given that the selling point of NLP tools like Siri or Alexa is their convenience: the seamless user experience is severely disconnected from the carbon footprint generated in the background, masking the resource-intensive processes needed to power AI. Compounding this cloaking of AI’s environmental impact is the language used to describe these tools. Crawford points out that metaphors such as “the cloud” present AI development as “something floating and delicate within a natural, green industry” (41). Such language upholds the deceptive image of sustainability that many technology companies project, ultimately obscuring their growing carbon footprints.
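A quick back-of-envelope check, not taken from Crawford’s text but implied by her figures, confirms the scale of this comparison: dividing the quoted emissions evenly across the quoted flights gives

\[
\frac{660{,}000\ \text{lbs CO}_2}{125\ \text{round trips}} = 5{,}280\ \text{lbs} \approx 2.4\ \text{metric tons of CO}_2\ \text{per round trip},
\]

which is broadly in line with published per-passenger estimates for a long-haul return flight between New York and Beijing, so the equivalence is plausible on its face.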
There have been some regulatory attempts to mitigate AI’s environmental impact, but tangible progress remains elusive. In 2010, the United States passed the Dodd-Frank Act, which required companies sourcing minerals to declare “where those minerals came from and whether the sale was funding armed militia” in the Democratic Republic of the Congo (34). However, the Act imposes no fines or charges, and major technology firms like Intel and Apple have been critiqued for “assessing smelting plants outside of Congo” instead of the mines themselves (34). Though such legislation reaffirms the detrimental impact of AI on the environment, loopholes and a lack of enforcement have impeded effective mitigation of its damages. Other companies, like Dell, have claimed that their supply chains are too complex for in-depth verification of the sources of their materials. Moreover, greenwashing tactics, exemplified by corporations such as Apple and Google purchasing carbon credits to offset emissions, further obscure the reality of their environmental footprints. Together, these practices perpetuate a cycle in which companies weaponise their financial resources, and the lack of transparency in production processes, to present their activities as environmentally responsible despite evidence to the contrary. Given the urgent need to address climate change, more focus must be placed on understanding AI’s environmental impact in order to push for genuine sustainability.
What Can We Do Better?
As we navigate the dynamic landscape of AI innovation, it is crucial to embrace a proactive and collaborative approach to regulation. While AI is still in its infancy and experiencing rapid growth, imposing all-encompassing regulations may stifle the innovation necessary to realise its potential. Instead, stakeholders must work together to craft balanced regulatory frameworks that foster innovation whilst addressing ethical concerns. One way to do so is to establish an annual roundtable among leading AI companies, national leaders, policymakers, academic researchers and others. Such a forum would enable a diverse range of perspectives to converge, and its outcomes could be shared with the public through comprehensive reports. These reports would provide relevant insights to shape preliminary regulatory proposals, and would also facilitate regular assessments of AI’s capabilities. Ultimately, rather than fruitlessly avoiding AI, as some educational institutions have done, our focus should be on exploring ways to harness its power ethically and shape a brighter future for all in our digital world.
Works Cited
Chui, Michael, et al. The State of AI in 2023: Generative AI’s Breakout Year. McKinsey & Company, 2023.
Collins, Christopher, et al. “Artificial Intelligence in Information Systems Research: A Systematic Literature Review and Research Agenda.” International Journal of Information Management, vol. 60, 2021, pp. 1–17. ScienceDirect, https://www.sciencedirect.com/science/article/pii/S0268401221000761?via%3Dihub.
Crawford, Kate. “Earth.” Atlas of AI, Yale University Press, 2021, pp. 24–51.
Marr, Bernard. “The Biggest AI Trends in the Next 10 Years.” Forbes, https://www.forbes.com/sites/bernardmarr/2024/02/19/the-biggest-ai-trends-in-the-10-years/?sh=6b7df402f8b2. Accessed 29 May 2024.
Maslej, Nestor, et al. The AI Index 2023 Annual Report. Stanford University, 2023.
Starcevic, Seb. “As Australian Colleges Crack Down on ChatGPT, Disabled Students Defend AI.” The Japan Times, https://www.japantimes.co.jp/news/2023/01/24/asia-pacific/australian-colleges-chatgpt-disabled-students/. Accessed 30 May 2024.