Do we need to know how to make a pizza to drive a car? No. Then why do we want to create such a thing? I am talking about AGI and how humans perceive it nowadays. Looking at its definition from Wikipedia,
Artificial general intelligence (AGI) is the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can.
Many researchers try every method to reach AGI. Some argue that it cannot be achieved; some suggest dividing the problem into subtasks and then combining the partial solutions into a general one. They all agree it would be an inevitably profound invention.
I would rather not call it an invention, but a discovery. An invention creates something that has not existed before, whether on purpose or by accident, whereas a discovery recognizes something that already existed, in a way nobody has perceived before. The light bulb is an invention because it is a solid object that was new to the world. By contrast, we say “the discovery of America” because the land was there all along; we only noticed it after Columbus.
My favorite discovery of all time is Play-Doh. Yes, it is a discovery rather than an invention, because it was first meant to be a wallpaper cleaner before it was marketed as children’s clay. Therefore, I believe AGI will not be a timeless invention but the discovery of a curious mind that can look at things differently.
Another point: do we need to define what we want to achieve? Nobody asked for a definition of fire before it was discovered. So do we need to define what AGI we want? On the other hand, Columbus was expecting land even before he reached America, and the wallpaper cleaner company defined its new target customers, children, and a new experience when it remarketed the product as “Play-Doh.” If we want to invent AGI, we must know what it should be capable of.
We are still unsure when we will reach AGI, yet we started building a negative bias against it long before today. I recently watched some AI-related movies, and I want to discuss them from the perspective of an AI learner.
In 2001: A Space Odyssey (1968), Kubrick introduces us to an AI that can read lips. The AI understands that the humans are trying to shut it down, sees them as a threat to itself, and tries to get rid of the crew. Despite all of that bias, after 50 years we have actually reached that level, and nobody seems to discuss the side effects.
Ex Machina (2014) gives us a clue about how we might recognize that we have reached AGI: the Turing test. Can an AI simulate emotions or actions well enough to deceive us? Can an AI love as we do? These questions have not been answered in the past five years (and most probably will not be in the next five either). If you want to put them to an AI, watch the interview with ERICA, a humanoid robot. Nevertheless, maybe the movie itself is an excellent example of why we should not build our secret startup in the middle of a forest!
Ghost in the Shell (anime 1995 / movie 2017) is one of the oldest cult classics about AGI; the Wachowskis openly cited the anime as an inspiration for “The Matrix” too. What makes us different from AI? Is love only a human thing? How big is the net? We all try to answer these questions while listening to Kenji Kawai’s stunning soundtrack. After all these biases and this human hostility across generations, I am thrilled to see that people still want to dig into this topic even harder than before. Do we need to define Asimov’s rules in a panic, or hate the word AI before it has even arrived? No; instead we should all differentiate science fiction from science, keep our motivation, and work hard.