I want to round up some common questions about AI, the future of intelligence, and machine learning, and try to find answers.

1. Is AI going to take over the world?

Short answer: No, it is very unlikely. Long answer: This dystopian scenario has been a favourite science fiction topic: humans create robots and artificial intelligence, the robots become more intelligent than humans, and they kill humanity for the sake of their own survival. It is unlikely to happen. Suppose robots did turn evil in the future: unless somebody hardcoded the instruction "kill humans" into them, they have no motive to do so. There would have to be a mistake or a gap in their instructions or definitions. For instance, consider the scenario where we tell robots to cure cancer and they conclude that killing humans is the cure, because no humans = no cancer. As Steven Pinker said, unless we fail to define clearly what a cure is, or forget to tell them not to kill, this is not possible. The other scenario is that robots realize they are more intelligent than humans and decide to take over the world. But intelligence or IQ alone is not enough for an agent to recognize its own level; consciousness is needed to be aware of one's own state. Artificial intelligence cannot reach consciousness by itself, so this scenario is not possible either.
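To make the definition-gap argument concrete, here is a minimal toy sketch in Python (all names and numbers are hypothetical, invented purely for illustration): an optimizer given only "minimize cancer cases" happily chooses a world with zero humans, while a patched objective that also penalizes harming people does not.

```python
# Toy sketch of the "no humans = no cancer" objective-misspecification gap.
# Everything here is hypothetical; it illustrates the argument, not a real system.

def cancer_cases(population: int, rate: float = 0.01) -> int:
    """Number of cancer cases in a population (toy model)."""
    return int(population * rate)

def naive_objective(population: int) -> int:
    """Badly specified goal: minimize cancer cases, nothing else."""
    return cancer_cases(population)

def safe_objective(population: int, original_population: int) -> float:
    """Patched goal: minimize cases, but heavily penalize losing people."""
    harm_penalty = (original_population - population) * 1_000
    return cancer_cases(population) + harm_penalty

# An optimizer that only sees naive_objective drives the population to zero:
candidates = [0, 1_000, 1_000_000]
print(min(candidates, key=naive_objective))                         # 0 -> "no humans, no cancer"
print(min(candidates, key=lambda p: safe_objective(p, 1_000_000)))  # 1_000_000 -> keeps humans
```

The "evil" outcome here comes entirely from the gap in the objective, not from any motive of the optimizer, which is exactly the point above.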

2. Neural nets = Human neural system?

Short answer: No. Long answer: Yes, we did look at the human brain for inspiration. Just as the Wright brothers observed how birds fly but did not build a copy of a bird, we look at the human neural system for inspiration, not imitation. The way the human brain thinks and processes information is not the same as deep learning's neurons and layers. From my perspective, artificial neurons aim to give answers that minimize their error, while humans try to maximize correctness. That is why we generalize so well: even a single example is enough for us.
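To make the "minimize error" point concrete, here is a minimal sketch (toy data and parameters, all hypothetical): a single linear neuron needs hundreds of small error-reducing steps to learn the pattern y = 2x, whereas a human would generalize it from one or two examples.

```python
# Minimal sketch of what "minimizing error" means for an artificial neuron.
# Toy data: the pattern to discover is y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0    # the neuron's single weight
lr = 0.01  # learning rate

for step in range(1000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # nudge w in the direction that reduces the error

print(round(w, 3))  # ~2.0, reached only after many tiny error-reducing steps
```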

3. Can we trust self-driving cars?

Short answer: Yes, but not today. Long answer: We have trust issues here. I don't want to be paranoid over scenarios, but we never see the driving licence of our Uber driver. We trust strangers because we are fine with the idea that "if someone at 18 can pass a 15-minute driving test, then they can drive for the rest of their life." We are more open to trusting humans even though we know they have not been tested on every case. On top of that, we panic in emergencies and are more likely to fail at the emotional level, whereas machines can be more stable. Indeed, we are not there yet today, but with enough data and sensors, I believe self-driving cars will not be a matter of trust in the future.

4. Will AI steal my job?

Short answer: Yes, and you will enjoy it. Long answer: Who wants to work in a job that does not improve them? Many dead-end jobs have already been replaced by machines, and that frees up lifetimes. Now we can focus more on ourselves or be part of more meaningful paths rather than our dead-end jobs. We can come up with new vacancies and new job titles as technology evolves. That way, we can create better, more helpful, and more meaningful jobs that serve us better and faster. I am not frightened of new jobs, but I am a bit worried about the coming conspiracy theories and political lobbies that may prevent us from reaching that future.

5. Can AI be legally responsible?

Short answer: Not now, maybe later. Long answer: Today we are at the point of discussing whether criminal cases, disputes, and even legal arguments can be resolved with AI. This will eventually lead to the question: can we hold AI accountable? Can AI be considered a legal entity? For now, our laws only allow certain types of institutions to be title owners. But if AI becomes accountable and transparent about its decisions, new laws could also recognize AI as a legal entity. At least, this is the conclusion of the Turkish "Law in the AI Era" report. You can read the report in Turkish.