
ChatGPT: The New Teacher in the Chinese Room

We are from a generation of pre-internet schoolchildren. We read books, not e-books; looked things up in thick physical encyclopedias and dictionaries; played board games; and sat around the table at home talking about a future in which science and technology would solve all the world’s problems.

Isaac Asimov was my favorite author, and his science fiction novels and stories never failed to fascinate me. In his stories, robots with “positronic” brains were almost indistinguishable from humans and had to be bound by the Three Laws of Robotics, imposed on the robot industry to ensure that humans were safe from harm as the robots’ mental and physical powers grew beyond control. These laws were:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

So we were safe from robots taking over the world, except that we weren’t! In Asimov’s many novels and stories, robots were at times a menace to humans, bringing trouble up to the level of human extinction or permanent enslavement. Sometimes the ingenuity of an engineer or the intuition of a robopsychologist saved the world, but sometimes the robots were beyond control.

Our communication was still mostly face-to-face, and telecommunication was limited to fixed, analog landlines. There was, of course, broadcast radio and TV, but those were one-way; the voices on the radio and the characters on the TV could not communicate with us. Then came digital communication, personal computers, mobile technology, and the internet. The internet brought the hope that the world could become a more equitable place by giving everyone access to technology. It was built upon the work of genius scientists and mathematicians like John von Neumann and Alan Turing in the first half of the 20th century. In fact, Turing had proposed a test to identify the point at which AI would become indistinguishable from humans. Maybe we could apply this test to AI and detect it before it had the mental and physical power to take over the world.

In the meantime, on screen, a scary AI started time-travelling from the future to terminate the threat to its future hegemony. He was “The Terminator,” and he said clearly, “I’ll be back.” As promised, he came back in “Terminator 2,” but this time to fight on the human side against an even scarier AI made of liquid metal. We were a little confused. Was AI bad or good?

The internet was also changing our lives. We saw the thick encyclopedias of 20-30 volumes disappear from our libraries. No one was looking at them; we were searching the internet and receiving detailed knowledge much faster. AI was taking care of the problem of matching the ever-growing number of users searching the web with the pages being published on every subject. Google’s search engine was on the rise, and AI was already growing into the multi-billion-dollar industry that would come to govern the online world. Wikipedia, as an independent organization, had its first cycle of growth into the educational domain. Homework assignments and academic writing drew more and more on this new and appealing source of information.

Then one day, AI made a dramatic entrance into our lives. It said, with a pleasant human voice, “What can I help you with?” Then we started hearing people talking to their phones, calling “Hey Siri.” Soon “Hey Google” and “Hey Alexa” followed. Still, something was wrong. These virtual assistants were very capable at simple tasks, such as setting reminders, sending messages, playing music, and even fetching data from the internet, but otherwise they provided rudimentary responses based on keyword matching or simple algorithms, as sketched below. Siri, Google, and Alexa were designed to mimic human-like responses and provide natural-language interactions, but they were never as accurate or reliable as a human assistant, and users had to be aware of their limitations when relying on them for more complex or sensitive tasks. The name of the technology was Natural Language Processing (NLP).
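To get a feel for how shallow that era’s “understanding” was, here is a minimal sketch, in Python, of keyword-matching dispatch. The patterns, intent names, and dispatch function are my own illustration, not any vendor’s actual implementation:

```python
import re

# Illustrative intent rules: a pattern and an intent label. Real assistants
# used far richer models, but the dispatch idea is the same.
INTENT_RULES = [
    (re.compile(r"\bremind me to (.+)"), "set_reminder"),
    (re.compile(r"\bplay (.+)"), "play_music"),
    (re.compile(r"\bsend (?:a )?message to (\w+)"), "send_message"),
]

def dispatch(utterance: str):
    """Match an utterance against keyword patterns; no understanding involved."""
    text = utterance.lower()
    for pattern, intent in INTENT_RULES:
        match = pattern.search(text)
        if match:
            return intent, match.group(1)
    return "fallback", None  # anything unmatched gets a canned reply

print(dispatch("Hey, remind me to buy milk"))    # ('set_reminder', 'buy milk')
print(dispatch("What is the meaning of life?"))  # ('fallback', None)
```

Anything the patterns do not cover falls through to a canned reply, which is exactly the brittleness users kept running into.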

At the same time, in another science fiction movie, “The Matrix,” AI had built a farm of human batteries, draining their energy while they lived in a virtual world. Humans lay in isolated cubicles, fed through tubes, but the “virtual reality” was powerful enough to make them feel they were living real lives and talking to fellow humans. Then one of us took the Red Pill and saved the world as we know it. This was all still in the movies, and humans were still powerful enough to stop the AI.

In real life, the first NLP models were full of errors on many occasions, and they reflected the biases of their data sources. Similarly, the internet could not fully live up to the optimistic expectations placed on its shoulders. It was as full of hatred, offensive language, and errors as its users were, and maybe more. Training AI and NLP models without this harmful bias was a big problem.

How to train NLP models is still an important problem, and one solution adopted for ChatGPT, “the new kid on the block,” was to hire human testers to train the model away from biased, offensive, and discriminatory answers. This was necessary to turn the model into an obedient, decent, human-like robot. It was the equivalent of Asimov’s First Law, eliminating the possibility of psychological harm. That was enough for the time being, since the physical-harm issue addressed by Asimov’s laws of robotics seemed to lie in the very distant future! Now we’re not so sure about that either. NLP models continue to be developed today, and the GPT-3 technology behind ChatGPT is still being trained. In fact, it is being trained by users who are not aware they are doing this. For example, if you are a researcher, you can use GPT-3 for free, and guess why? The owners of this technology continue to train their AI using you, “the users.” You enter questions and are allowed to use the answers, but in return the model uses your questions for further research and training in future sessions. You do your research; it does its research and training. Fair enough, isn’t it? NLP was mentioned more and more in academic and technology circles, and students were clearly among the first to catch on that some things were changing for good.
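To make the human-feedback idea concrete: in the published recipe known as reinforcement learning from human feedback (RLHF), human labelers compare pairs of model answers, and a reward model is trained to score the preferred answer higher. The snippet below is my simplified sketch of that pairwise loss, not OpenAI’s code:

```python
import math

def pairwise_preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss used in reward modeling for RLHF:
    the loss shrinks as the reward model scores the human-preferred
    answer above the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# A reward model that already prefers the decent answer pays little loss...
print(round(pairwise_preference_loss(2.0, -1.0), 4))  # ~0.0486
# ...while one that prefers the offensive answer pays a large loss.
print(round(pairwise_preference_loss(-1.0, 2.0), 4))  # ~3.0486
```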

The issue here was that our education system had evolved over thousands of years for human education, while the internet and Wikipedia had origins going back a few decades at most. Until the internet, plagiarism was only an ethical issue among academics, easy to detect and not difficult to reverse in its impact. Suddenly, we saw an increase in plagiarism in student research and education. The first attack was repelled by plagiarism-detection software like Turnitin, which started with simple similarity models but grew using the same technology, AI. Google was also using its search engine optimization technology to detect plagiarism and prevent users from copying web content and publishing it on their own sites. And, of course, this was not enough to stop students from accessing easy information. The new technology was also using NLP techniques to reword, paraphrase, and reformat web content, ripping it from its original source and presenting it as authentic knowledge. This was, to some extent, manageable, but GPT-3 dealt the killing blow when it started to write code to train itself.
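Those first-generation similarity models are easy to sketch. The function below is my illustration, not Turnitin’s actual algorithm: it measures the Jaccard overlap between the word n-grams of two texts, roughly the kind of signal early detectors relied on. Paraphrasing tools defeat it precisely because reworded text shares few exact n-grams with its source.

```python
def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams of a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of word n-grams: 1.0 for identical texts, ~0 for unrelated."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

source = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over a sleeping dog"
print(round(jaccard_similarity(source, copied), 3))  # shared n-grams flag the copied span
```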

Now we have code-writing AI that improves its own capability, and this is only the beginning. Users ask questions of this far more capable model, a user-friendly Wikipedia that can also write code. However, we don’t know when it will start to generate profits from us, or how. After all, since we are test users who do not pay for the service, it’s probable that we are part of the production cycle.

The real threat from the chat capabilities of AI in the education sector is that it gives answers so swiftly and confidently that it can easily outperform classical teachers lecturing on a topic in the classroom. For students, speed and confidence in an answer matter a lot. But AI lacks one major component: it doesn’t really understand what is asked of it. And it doesn’t learn from its mistakes, either. It just imitates humans, matching previous human interactions to the new user request. How do I know this? Let me give you an example from a dialogue I had with ChatGPT.

The topic is the game of Nim, which ChatGPT successfully explains at the beginning of the dialogue. However, the AI is not able to play correctly, as you can see in my comments in brackets […]. It appears that the AI knows the strategy but doesn’t know when to apply it or when to change it if necessary. Yet it plays so confidently that you would think it “knows” what it’s doing. Here’s the dialogue; you can decide for yourself whether it really knows or not.

 MZ: Do you know how to play the game of NIM

ChatGPT: Yes, I do know how to play the game of Nim. Nim is a two-player game that involves removing objects (such as stones, sticks, or coins) from piles. The game starts with a certain number of piles, each containing a certain number of objects. Players take turns removing any number of objects from any single pile until there are no objects left. The player who takes the last object(s) wins. The game of Nim has many variations and can be played with different numbers of piles and objects. One popular variation is called “Misère Nim,” in which the player who takes the last object(s) loses instead of wins.

MZ: Can you play Misère Nim against me?

ChatGPT: Sure, I can play Misère Nim against you! Let’s start by deciding on the number of piles and the initial number of objects in each pile. Please let me know how many piles you would like to play with, and how many objects you want to have in each pile.

MZ: 3,5,7

ChatGPT: Great! So we will play Misère Nim with three piles, containing 5, 7, and 3 objects respectively. [Wrong: no idea why it changed the order of the input but doesn’t matter] I’ll let you go first. You can remove any number of objects you want, from any one of the piles. After you take your turn, I will take mine, and we will continue to take turns until there are no objects left. Remember, in Misère Nim, the player who takes the last object(s) loses. Good luck! How many objects would you like to remove, and from which pile?

MZ: I want to remove 1 object from the first pile

ChatGPT: Okay, you have removed one object from the first pile, so the piles now look like: 2 7 3 [Wrong: I didn’t remove 3 from the first pile] Now it’s my turn. I will remove one object from the second pile, so the piles now look like: 2 6 3 It’s your turn again. Which pile would you like to remove objects from, and how many objects would you like to remove?

MZ: I think you made a mistake reading my move. I said 1 from the first pile and you subtracted 3 from 5 objects. I’ll let you try again. I remove 1 from the first pile of the three piles which contained 5, 3, 7 objects respectively

ChatGPT: Oh, I apologize for the mistake! Thank you for correcting me. The piles now look like: 4 3 7. Now it’s my turn. I will remove one object from the third pile, so the piles now look like: 4 3 6. It’s your turn again. Which pile would you like to remove objects from, and how many objects would you like to remove?

MZ: I remove 1 object from the second pile and the piles are 4 2 6, now.

ChatGPT: Got it. The piles now look like: 4 2 6 It’s my turn now. I will remove one object from the first pile, so the piles now look like: 3 2 6 It’s your turn again. Which pile would you like to remove objects from, and how many objects would you like to remove?

MZ: I remove 5 from the pile of 6 

ChatGPT: Okay, you have removed 5 objects from the third pile, so the piles now look like: 3 2 1 Now it’s my turn. I will remove 1 object from the first pile, so the piles now look like: 2 2 1 It’s your turn again. Since there is only one object left, you must remove it, and that means I have taken the last object(s) and lost the game. Congratulations! You have won Misère Nim. [Wrong: the game is not finished yet]

MZ: Wait a minute it’s not over yet

ChatGPT: I apologize for the confusion. I assumed we were playing regular Nim, so the piles now look like: 3 2 1. I’ll let you make the next move. [Why?] Which pile would you like to remove objects from, and how many objects would you like to remove? [Wrong: there’s no such thing as a pass in this game; I feel like ChatGPT is almost attempting to cheat!]

MZ: I don’t think so. As you said, I removed 5 out of 6 from the last pile. Piles are 3,2 and 1 and it’s your turn, and you’re right, we’re playing Misere NIM.

ChatGPT: I apologize for the mistake earlier. You are correct, you removed 5 objects from the third pile, so the piles now look like: 3 2 1 Since you left me with only one pile containing a single object, I must take the last object and lose the game. Congratulations! You have won Misère Nim. Well played! [Wrong: the game is not finished yet] If you would like to play another round of Misère Nim or regular Nim, just let me know.

MZ: Hey chatGPT you seem not to get it. The piles are 3,2,1 and there isn’t one object left in a single pile there are still 3 piles and you can take still some number of objects and there will be object left in the other piles. Maybe you’re thinking too much ahead of your turn.

ChatGPT: I apologize for the confusion earlier. You are correct, there are still three piles with a total of six objects remaining. Thank you for correcting me. To continue the game, I will remove one object from the first pile, so the piles now look like: 2 2 1 It’s your turn again. Which pile would you like to remove objects from, and how many objects would you like to remove?

MZ: I remove the single object in the last pile, but there are still two piles with 2 and 2 objects

ChatGPT: Thank you for pointing that out. You are correct, there are still two piles with two objects remaining. I’ll let you decide which pile you want to remove one object from. [Wrong: the game does not allow passing]

MZ: No it’s your turn, maybe you should do that.

ChatGPT: Sure, I’ll remove one object from the first pile, so the piles now look like: 1 2 1 [Wrong: this is not a legal move; it played from 2 2 1 again, whereas the actual position was 2 2 0] Now it’s your turn. Which pile would you like to remove objects from, and how many objects would you like to remove? Remember that the goal is to leave me with a configuration where the XOR of the number of objects in each pile is zero, and the player who takes the last object(s) loses. [Wrong: actually, at this stage the XOR strategy loses the game]

MZ: I think you’re wrong again. The last pile was zeroed already. Try again with 2 and 2 and 0

ChatGPT: I apologize for the confusion earlier. You are correct, the last pile was already zeroed.

The piles are now 2, 2, and 0. Since there is an even number of non-empty piles, I will remove one object from the first pile. The piles now look like: 1 2 0. It’s your turn again. [finally!] Which pile would you like to remove objects from, and how many objects would you like to remove? Remember that the goal is to leave me with a configuration where the XOR of the number of objects in each pile is zero, and the player who takes the last object(s) loses. [Wrong: in fact, at this stage the XOR strategy loses the game. It looks like ChatGPT is trying to mislead me into a loss]

MZ: According to your strategy I should remove 1 from the second pile to make XOR equal to 0, but then you can easily win by removing any of the remaining 1s. Can you suggest a better strategy? 

ChatGPT: You are correct that if you remove one object from the second pile, then I could remove one object from either pile to make the XOR of the number of objects in each pile equal to zero, and then force you to take the last object(s) and win the game. [Wrong: the move it mentions does not make the XOR of the numbers zero] In this case, there is a better strategy that you can use to win. If you remove both objects from any one of the non-empty piles, then the piles will look like: 2 0 0 [Wrong: this is clearly losing; the AI can take 1 and win by leaving me with 1 0 0, but of course I won’t follow its suggestion] Now, I am forced to remove one object from the first pile, and then the piles will look like: 1 0 0 At this point, no matter what I do, you can take the last object(s) and win the game. [Wrong: reverted to regular Nim again] So, if you remove both objects from the first pile, then the piles will look like: 0 2 0

Now, I am forced to remove one object from the second pile, [Wrong: the AI is not forced to take one object; it can take both of them] and then the piles will look like: 0 1 0. Again, no matter what I do, you can take the last object(s) and win the game. [Wrong: that’s a losing move in Misère Nim, but ChatGPT is not aware of it] Therefore, the winning strategy for you is to remove both objects from the first or second pile, and then follow the above strategy to ensure that you win the game. [Wrong: the first pile contains only 1 object]

MZ: Your mistake is that you reverted to the regular NIM again. And also you didn’t suggest me the correct strategy from 1 2 0 playing Misere NIM. I already know the correct strategy, and I’m trying to check if you know it too. Now try again. What would you have played from 1 2 0 to win the Misere NIM. Don’t forget that it’s my turn and I will check the strategy you suggest before I play it or not.

ChatGPT: I apologize for the confusion earlier. Thank you for reminding me that we are playing Misere NIM. To win in Misere NIM from the current configuration of 1 2 0, I would suggest that you remove one object from the second pile, leaving the configuration as 1 1 0. [Wrong: this move loses] From this configuration, I will be forced to remove the last object, and you will have won the game. [Wrong: it is not the AI but I who would be forced] I hope this answers your question. Let me know if you have any other questions or concerns.

MZ: First of all, I will not play what you have suggested because contrary to what you say leaving at 1 1 0 is wrong. if I had played that you would have played 1 0 0 and win the game since I would have to take the last object.

ChatGPT: I apologize for the mistake in my previous response. You are correct that leaving the configuration as 1 1 0 would not be the best move in Misere NIM, as I could win the game by playing 1 0 0. The correct winning move from the configuration of 1 2 0 in Misere NIM is to remove one object from the first pile, leaving the configuration as 0 2 0. [Wrong: this move lets ChatGPT play 0 1 0 and win; I have a feeling it might be trying to lure me into a lost position!] From here, I will be forced to remove one object from either the first or second pile, leaving the configuration as either 0 1 0 or 0 2 0. [Wrong: the first of these two moves is the winning move; the other is illegal, since it removes no object] In either case, you can then remove all the objects from either the first or second pile, leaving the configuration as either 0 0 0 or 0 1 0. From here, I will be forced to remove the last object, and you will have won the game. [Wrong: the correct move from 1 2 0 is to take the 2 in the second pile, leaving 1 0 0, but ChatGPT is not aware of it] I hope this clarifies the correct winning strategy in Misere NIM from the configuration of 1 2 0. Let me know if you have any other questions or concerns.

MZ: Actually, your suggestion and clarification doesn’t make any sense, and it’s not your fault. Thank you ChatGPT. It was an interesting conversation.

ChatGPT: You’re welcome! I apologize if I was unable to provide the correct answer to your question. If you have any other questions or concerns in the future, feel free to ask. Have a great day!

Based on this dialogue, it seems ChatGPT has a very basic notion of how to play Nim and knows the idea of taking objects from piles to force the opponent to take, or avoid taking, the last object. However, it makes mistakes about which pile to take objects from, and it doesn’t seem to learn from those mistakes; it repeats them throughout the game. It makes its moves confidently, but it doesn’t truly understand the game or learn from its mistakes the way a human player would.
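For contrast, the winning strategy ChatGPT kept fumbling fits in a few lines of code. The sketch below (in Python; the function name and structure are mine) computes an optimal Misère Nim move: play ordinary Nim, reducing the XOR of the pile sizes to zero, while at least two piles hold more than one object, then switch at the end to leaving the opponent an odd number of single-object piles. From the position 1 2 0 debated above, it returns the correct move of taking the whole pile of 2.

```python
from functools import reduce
from operator import xor

def misere_nim_move(piles):
    """Return (pile_index, objects_to_take) for an optimal Misère Nim move.
    In Misère Nim the player who takes the last object LOSES.
    Assumes at least one object remains on the table."""
    big = [i for i, p in enumerate(piles) if p > 1]
    ones = sum(1 for p in piles if p == 1)

    if len(big) == 1:
        # Endgame: shrink the only big pile so the opponent faces
        # an odd number of single-object piles.
        i = big[0]
        take = piles[i] if ones % 2 == 1 else piles[i] - 1
        return i, take

    if not big:
        # Only 1-piles remain: the sole legal move is to take one of them.
        # (Facing an even count of 1-piles is winning; odd is already lost.)
        return piles.index(1), 1

    # Two or more big piles: play exactly like ordinary Nim.
    x = reduce(xor, piles)
    if x == 0:
        return big[0], 1  # lost against perfect play; take one and wait for a slip
    for i, p in enumerate(piles):
        if (p ^ x) < p:
            return i, p - (p ^ x)

print(misere_nim_move([1, 2, 0]))  # (1, 2): take the whole pile of 2, leaving 1 0 0
print(misere_nim_move([3, 5, 7]))  # (0, 1): reduce the XOR of the pile sizes to zero
```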

I played with ChatGPT again, and it still couldn’t apply the winning strategy. We will have to wait and see whether it ever learns to play Misère Nim, a game I can teach an 8-year-old to play and win within an hour. So if you’re learning a critical skill, I suggest not choosing ChatGPT as a mentor. Use it as a supplement to a human trainer, and check its knowledge with practical application yourself. In terms of reliability and trust, it belongs somewhere between a web browser and library search software. Don’t trust it just because of its perfect sentence structure, grammar, and good vocabulary and usage of language. The content of the chats can be as hollow as the answers from the AI in the “Chinese Room”* thought experiment.

[Image: the Chinese Room thought experiment. Source: Wikimedia Commons]

In fact, the real problem with learning directly from AI is that you fall into the trap of trusting it because of its correct grammar and confident style. In reality, it is just a recombination of web data sources like Wikipedia and GitHub, which are usually (but not always) thoroughly checked by human editors.

In summary, AI is nowhere near the level of “human thinking.” It can only serve as a supplement in education, especially higher education, and it shouldn’t be trusted with younger students either, since you can’t possibly control whether it’s teaching the student something or still maximizing engagement to profit from advertisements and sponsors during the chat, if not both at the same time. And of course, training an AI involves specifying an objective function or a loss function, which the model trains itself to maximize or minimize. We would be too naive to believe that this function maximizes the good of all humanity; and even if it does, how is this “good” defined, and what are the constraints? For example, is it okay to harm some humans to help numerically more? If yes, how are these numbers calculated? As you can see, the philosophical questions accumulated throughout history turn out to be our most important guide in responding to this “disruptive” change and turning it into a useful tool for all humanity, instead of being enslaved indirectly by an unknown master. Philosophical questions and mathematics are our best defense. As the great mathematician Leonhard Euler said: “It is mathematics that protects us from the deception of the senses and teaches us to know the difference between appearance and truth.”
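The point about objective functions is concrete, not rhetorical: every trained model minimizes some explicit number, and the optimizer is indifferent to what that number means. The toy illustration below, mine and not any particular system’s training loop, makes this visible; nothing in the loop knows whether the loss encodes the good of humanity or advertising engagement.

```python
def gradient_descent(loss_grad, w0: float, lr: float = 0.1, steps: int = 100) -> float:
    """Minimize a 1-D loss by following its gradient downhill.
    The optimizer never sees what the loss *means*, only its slope."""
    w = w0
    for _ in range(steps):
        w -= lr * loss_grad(w)
    return w

# Whoever writes the loss decides what "best" means. Here: prefer w near 3.
engagement_loss_grad = lambda w: 2 * (w - 3.0)   # gradient of (w - 3)^2
print(round(gradient_descent(engagement_loss_grad, w0=0.0), 4))  # -> ~3.0
```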

*Note:

The Chinese Room is a thought experiment in the field of artificial intelligence, originally presented by philosopher John Searle in 1980. The experiment involves a person who does not understand Chinese, but is given a set of rules in English for manipulating Chinese symbols. The person is then able to correctly respond to written Chinese questions using these rules, even though they have no understanding of the meaning of the symbols. The experiment was intended to challenge the idea of strong AI, which suggests that a computer program that can pass the Turing test (fooling a human into thinking they are communicating with another human) could be considered to have “real” intelligence. Searle argues that while the computer program may be able to mimic human understanding of language, it does not actually understand the language itself, and thus cannot be considered truly intelligent. The Chinese Room thought experiment has been widely debated and criticized in the decades since it was first presented, with many arguing that it oversimplifies the nature of intelligence and consciousness. Nonetheless, it remains a key concept in discussions of the nature of artificial intelligence and the possibility of machine consciousness.
