Hi Kazim, one of my favourites, and close to my own view, is Daniel Dennett: "Why robots won't rule the world" (Viewsnight, YouTube). In short, the people who say the technology can be trusted should somehow be accountable for what the technology causes. But the key question is: trusted to do what?
Yes, besides Asimov's classic laws, values are an essential aspect. But as long as humans themselves lack values, we will hardly be able to convey them to an AI.
I liked your approach that artificial intelligence has to be raised like a child; it reminded me of movies like Blade Runner and Ex Machina. Thanks for that nice scenario.
Well said, Jana.
That is why, I believe, we need to worry about the people first, before AI or robots. I agree with what you wrote so simply: "Any AI/machine learning is as good as its developer" (for the time being, at least). Hence, our focus must be on developers (people) of any kind. We need to get developers thinking about, and taking responsibility for, these concerns now.
Once, in a discussion, I mentioned that developing AI is very similar to giving birth to and raising a child. The parents are responsible for what kind of values the child will have and how it will become part of society, and eventually of humanity. The foundation is always the nest/home where the child is being developed (coded) into a fine person. If this job is done properly, then the surrounding social factors (other people and society) may not be strong enough to push this fine person over to the dark side. This may sound like wishful thinking, but I still have faith in people :)
One of the problems is the common understanding of AI as intelligence in the human sense, with free will. On that view, theoretically any AI could become a danger to humans. I also think that Asimov's laws of robotics are fundamental when it comes to the question of trust.
After all, there have been cases in the past where one AI programmed another AI, and in a few days, they had developed a secret language and had to be shut down. *
And even when an AI acts on its own, it has been shown that one turned into a right-wing radical within a few days because of the bad influence of humans. **
Any AI/machine learning is as good as its developer. And since humans are fallible, so too will artificial intelligence be.
Sources: * https://bit.ly/3G2wcuy ** https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
I am tempted to read Stanislaw Lem and Isaac Asimov on the Laws of Robotics again: https://www.hyperkommunikation.ch/literatur/asimov_robot.htm
Sorry, I could only find a German reference.
Nikos, many thanks. Gerd Leonhard's "Technology vs. Humanity: The Coming Clash Between Man and Machine" has been around since 2016, but it is as true as ever. I am glad that we are on the same page.
Hello all, I just joined the group and have been checking the posts. I have been involved in a few projects related to e-mobility (EVs) and specially designed industrial drones. I would say that autonomous vehicles are more than ready and are waiting for the ecosystem and the regulations, whereas "flying cars" (actually bigger drones that look like cars) are already in the air and will be operating soon.
So my question is: can we relate these technologies and ask the same question about autonomous vehicles? Can the technology be trusted, or is it too scary because we see it in warplanes and sci-fi movies?
Your comments are much appreciated.
Fascinating topic. I started engaging with the subject after watching some science fiction movies. The actions being carried out are digitized, but the victims are still human beings.
I think this is the beginning of Skynet (Terminator).