Recently I read Isaac Asimov's science fiction novels and learned about his famous "Three Laws of Robotics" (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics). But I am not convinced by his First Law, "A robot may not injure a human being," especially once AI reaches the level of Artificial General Intelligence (AGI) or even Artificial Superintelligence (ASI). What if, in the near future, some AIs in a group follow this no-harm rule while the rest do not? I am curious: what is the probability that these AIs reach a consensus to start harming people? I would like to build mathematical/probability/simulation models that fit this scenario and estimate that probability. Any thoughts or discussion are very welcome.
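To make the question concrete, here is one possible starting point: a minimal Monte Carlo sketch in Python. Everything in it is an illustrative assumption, not an established model of AI behavior: the name simulate_consensus, the idea that "aligned" agents always vote against harm, that the remaining agents vote for harm independently with some probability, and that "consensus to harm" means the harm votes exceed a fixed fraction of the group. All parameter values are arbitrary toy numbers.

```python
import random

def simulate_consensus(n_agents=100, p_aligned=0.3, p_defect=0.7,
                       threshold=0.5, n_trials=10_000, seed=0):
    """Monte Carlo estimate of the probability that a group of AIs
    reaches a "consensus to harm", under toy assumptions:
      - a fraction p_aligned of agents always votes against harm
        (they follow the no-harm rule);
      - each remaining agent independently votes for harm with
        probability p_defect;
      - consensus to harm = harm votes exceed `threshold` of ALL agents.
    All parameters are hypothetical; real multi-agent dynamics
    (communication, persuasion, coalitions) are not modeled here.
    """
    rng = random.Random(seed)
    n_aligned = int(n_agents * p_aligned)
    n_free = n_agents - n_aligned  # agents not bound by the no-harm rule
    harmful = 0
    for _ in range(n_trials):
        harm_votes = sum(rng.random() < p_defect for _ in range(n_free))
        if harm_votes / n_agents > threshold:
            harmful += 1
    return harmful / n_trials

# Roughly 0.35 with these toy parameters; the estimate is very
# sensitive to p_aligned, p_defect, and threshold.
print(simulate_consensus())
```

Under this independence assumption the simulation is not even necessary: the harm votes follow a Binomial(n_free, p_defect) distribution, so the same probability can be computed exactly (e.g. with scipy.stats.binom.sf). The simulation only becomes worth it once you add the interesting parts, such as agents influencing each other's votes over repeated rounds (opinion-dynamics or voter models), which is probably closer to the "consensus" scenario in the question.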