INTEGRATION OF ARTIFICIAL INTELLIGENCE IN THE EMPLOYMENT SECTOR – A THREAT OR A TECHNOLOGICAL REVOLUTION?

By Enes Sastoli*

Today, billions are being spent on the development of artificial intelligence (AI), and articles often emphasize that people are worried about their employment prospects as AI advances and grows in popularity. Depending on where such studies are conducted, the share of the population expressing this fear can reach 70%. Because the field is advancing so rapidly, there is a sense that everything is changing overnight. Not everyone understands what artificial intelligence is or how it works, which contributes to the spread of prejudices such as the fear of losing one's job to AI. The most recent case almost all of us are familiar with is the release of ChatGPT by OpenAI. More and more renowned companies, such as Microsoft, Meta, Google, and Alibaba, are creating similar models and presenting them as their next innovation. To reach as many users as possible, these companies have, through marketing, blurred the line between reality and illusion. The result is confusion, as the technology is presented as possessing capabilities beyond what it actually has.


How does it work?

Reframing models like ChatGPT as tools rather than ‘living beings’ can serve as an antidote to this misconception. It should be emphasized that the main goal of the field of artificial intelligence is to imitate human intelligence using machines. The field aims to develop intelligent systems that exhibit mental abilities such as consciousness, reasoning, problem-solving, memory, and creativity. One of its subfields is “Natural Language Processing” (NLP), which enables machines to analyze, understand, and interpret human language; through NLP, information can be extracted from text-based data. Among the best-known applications of NLP are the so-called large language models (LLMs). These models are very large neural networks, and their modern versions are built on the transformer architecture introduced by Google. They are trained on large volumes of text; in the case of ChatGPT, this data is sourced mainly from the internet. This training allows large language models to generate coherent, human-like responses despite lacking the key elements of human intelligence. These models often give users inaccurate, hallucinated, or incoherent information, in part because their training data extends only up to a cutoff date. Nevertheless, they have proven very capable at processing and analyzing data, and even at generating code.
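
At its core, a large language model does nothing more mysterious than predict the next token of a text, over and over. The following is a minimal sketch of that mechanism in Python, using the small, freely available GPT-2 model from the Hugging Face transformers library as a stand-in for the far larger models behind ChatGPT:

    # Minimal sketch: text generation as repeated next-token prediction.
    # GPT-2 is used here only because it is small and freely available.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Artificial intelligence is"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Extend the prompt by 20 tokens; by default the most probable next
    # token is appended at each step. The apparent coherence of the
    # output emerges entirely from this simple objective.
    output = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(output[0], skip_special_tokens=True))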


Worldwide:

New techniques such as “prompt engineering” have been developed to make these large language models more effective at generating the desired responses. The way our instructions are formulated and structured directly influences the answers we receive, as sketched in the example below. Some international universities have recently begun offering courses to familiarize students with prompt engineering, and many companies now advertise openings for “prompt engineers.” Previous experience in the profession is not a hiring criterion, but a background in coding or computer engineering, along with skills such as business knowledge, communication, creativity, and critical thinking, can help candidates secure the position. The field of application development is also progressing rapidly, and the technologies needed to build applications are now so numerous that it is difficult for any one programmer to master them all. Here, artificial intelligence can be used to solve low-level coding problems and to automate repetitive tasks, leaving programmers free to focus on auditing and validating the resulting programs. Software development demands thorough planning and a series of complex steps, so human talent remains crucial in managing the process.
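
As a minimal illustration, consider asking for the same information twice through OpenAI's Python API, once vaguely and once with a structured instruction. The prompts and model name are purely illustrative, and an OPENAI_API_KEY environment variable is assumed:

    # Minimal sketch of prompt engineering: the same request, phrased
    # vaguely and then with explicit structure. Assumes the openai
    # Python package (v1+) and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "Tell me about energy in France.",  # vague
        "You are a data analyst. In exactly three bullet points, "
        "summarize how France produces its electricity, naming the "
        "main sources by approximate share.",  # structured
    ]

    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content, "\n---")

In practice, the structured prompt reliably yields an answer much closer to what the user actually needs, and producing such prompts consistently is precisely the skill this new profession trains.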


An application:

An interesting use of models like GPT-3.5 or GPT-4 is the creation of interactive “dashboards” for data visualization. A dashboard consists of various charts displaying the most important aspects of a dataset, enabling information to be presented to the public simply and effectively. For example, a dashboard on daily energy production in France could include a “bar chart” showing daily production in MW by source, and a “pie chart” showing each source’s percentage share of production.

Creating such a dashboard follows a specific workflow: first the data source must be found, then the data must be processed, and finally a visualization method must be selected. GPT models can automate and simplify this process. To use them programmatically, one must register on the OpenAI website and generate a so-called API key. Working with the API from Python offers many advantages: connecting the models to data sources in different formats, giving them access to the internet, choosing among many different models, and modifying their parameters. These capabilities are gradually becoming available on the ChatGPT platform as well, although some are still in testing or require a monthly subscription. A minimal sketch of such an API call appears below.

GPT models are also capable of identifying problems in a dataset’s structure and suggesting how to restructure it. With the Langchain library, such problems can even be adjusted automatically by so-called agents, without user intervention (see the second sketch below). As for the visualization itself, these large language models can suggest which charts best suit the analyses performed; once the type of visualization is chosen, GPT can generate the plotting code from detailed instructions based on the grammar of graphics, specifying the Python libraries with which the charts should be built (see the final sketch below). As a final step, the generated code must be tested: if errors remain and the necessary adjustments are not made, the chart may simply not work. Even here GPT models prove valuable, helping to identify and correct coding errors, which saves time for experienced programmers and newcomers alike.
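
First, a minimal sketch of the kind of API call described above: describing a hypothetical dataset's columns to a GPT model and asking which charts would suit it. The schema and model name are illustrative, and an OPENAI_API_KEY environment variable is assumed:

    # Minimal sketch: asking a GPT model to recommend chart types for
    # a dataset. The schema below is hypothetical.
    from openai import OpenAI

    client = OpenAI()

    schema = ("date (YYYY-MM-DD), source (nuclear, hydro, wind, "
              "solar, gas), production_mw (float)")

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"A dataset has the columns: {schema}. "
                       "Which chart types would best show daily "
                       "production by source and each source's "
                       "percentage share? Answer briefly.",
        }],
    )
    print(response.choices[0].message.content)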
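
Second, the agent workflow might look roughly as follows. Langchain's import paths and parameters have shifted between versions, so this sketch assumes the langchain_experimental layout together with the langchain-openai package, and a hypothetical energy.csv file:

    # Rough sketch of a Langchain pandas agent inspecting a dataframe.
    # Import paths and parameters vary across Langchain versions.
    import pandas as pd
    from langchain_openai import ChatOpenAI
    from langchain_experimental.agents import create_pandas_dataframe_agent

    df = pd.read_csv("energy.csv")  # hypothetical dataset

    agent = create_pandas_dataframe_agent(
        ChatOpenAI(model="gpt-4", temperature=0),
        df,
        verbose=True,
        allow_dangerous_code=True,  # the agent executes generated Python
    )
    agent.invoke("Check this dataframe for structural problems, such as "
                 "missing values or wrongly typed columns, and fix them.")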
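
Finally, a sketch of the two charts themselves, built here with the plotly library on invented, purely illustrative figures; a real dashboard would load them from a public data source:

    # Minimal sketch of the dashboard's two charts. The production
    # figures are invented for illustration only.
    import pandas as pd
    import plotly.express as px

    df = pd.DataFrame({
        "source": ["nuclear", "hydro", "wind", "solar", "gas"],
        "production_mw": [42000, 6500, 4800, 2100, 3000],
    })

    # Bar chart: daily production in MW by source.
    bar = px.bar(df, x="source", y="production_mw",
                 title="Daily energy production in France (MW)")

    # Pie chart: each source's percentage share of production.
    pie = px.pie(df, names="source", values="production_mw",
                 title="Share of daily production by source (%)")

    bar.show()
    pie.show()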


Revolution or threat?

As human beings, we are biologically limited in performing certain tasks and solving certain problems. Throughout our history, technology has accompanied us at every step, helping us overcome these barriers. Just as the problems we encounter every day evolve, technology must evolve with them. On the spectrum of technological advancement, AI represents the next step. The next generation of machines we are building is here to stay, with the potential to revolutionize entire industries, including data science and software development.


*Enes Sastoli is an MSc student in Computer Engineering at UMT.