The question of AI development, ethics, and regulation

Mafalda N.

Artificial Intelligence (AI) has evolved into a socioeconomic and technological framework for modern society. It has considerably improved operations in fields such as healthcare, finance, and transportation. The promise of AI lies in how future generations will approach complex challenges, as technological advances accelerate both industry and research. However, the potential uses of AI also raise ethical concerns, mandating regulatory frameworks to guide its development and implementation.

Alan Turing, a mathematician and computer scientist whose work was foundational to AI, explored the mathematical possibility of artificial intelligence. In 1950 Turing published a paper titled "Computing Machinery and Intelligence," in which he asked whether machines could think and proposed a test for machine intelligence. Turing's concept was first realized five years later with the program Logic Theorist, which mimicked the problem-solving skills of a human. From 1957 to 1974, AI flourished as new computers became more accessible and could store more information. This rapid progress then met a hard limit: computers still could not store enough data or process it fast enough, which led to a decrease in funding and research for AI development. Even so, the shortage of funding and public interest did not halt progress; many landmark goals were achieved in this era thanks to the continuous development of computer hardware and data. These achievements eventually brought public attention and funding back to the field, most famously in 1997, when IBM's chess-playing computer Deep Blue beat the world champion Garry Kasparov ("The History").

Society's eventual adoption of AI into daily life led the European Parliament to develop the first regulatory act on AI, which aims to ensure better conditions for the use of this innovative technology ("EU AI Act"). The Parliament wants AI systems used in the EU to be safe, transparent, traceable, non-discriminatory, and environmentally friendly, but it believes that AI may pose a threat to these goals. The most recent step was taken on 14 June 2023, when Members of the European Parliament (MEPs) adopted their negotiating position on the AI Act. Talks will now begin with EU countries on the final form of the law, which is expected to be adopted by the EU in the first half of 2024.

Another prominent issue with the integration of AI into society is its potential for exploitation by cybercriminals, a practice that has grown increasingly common online. A prominent example is deepfakes, which are used in disinformation campaigns and have immense potential to distort reality. Another is AI-supported password guessing, which augments cracking tools such as HashCat and John the Ripper and undermines computer safety and privacy ("Exploiting AI").

Although the United Nations acknowledges that AI possesses tremendous potential for growth and for improving various industries, it also recognizes the need to regulate this development so that it conforms to fundamental rights. While various Member States have requested an ethical framework for AI development, the issue must be discussed with awareness at a global level to ensure a proper framework of AI ethics for the future.

Works Cited  


"AI ethics (AI code of ethics)." TechTarget,  

The Darwinian Argument for Worrying About AI.  

"EU AI Act: first regulation on artificial intelligence." European Parliament,  

"Exploiting AI: How cybercriminal Misuse and Abuse AI." Trend Micro,  

"The History of Artificial Intelligence." Harvard Education,