I recently read Isaac Asimov's science fiction novels and learned about his famous "Three Laws of Robotics" (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics). But I am not convinced by his first law, "A robot may not injure a human being," especially once AI reaches the level of Artificial General Intelligence (AGI) or even Artificial Superintelligence (ASI). What if, in the near future, some AIs in a group follow this no-harm rule while the rest don't? I am curious what the probability is that these AIs would reach a consensus to start harming people. I would like to build mathematical/probability/simulation models that fit this scenario and estimate that probability; a minimal toy sketch of the kind of simulation I have in mind is included below. Any thoughts or discussion are very welcome.
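For concreteness, here is a minimal sketch of one possible model: a toy voter-model consensus process in which each agent is either "no-harm" or "unconstrained" and repeatedly copies the stance of a randomly chosen peer. Everything here (the agent count, the initial fraction of rule-following AIs, the copy rule) is my own assumption for illustration, not a claim about how real AI systems behave.

```python
import random


def simulate_once(n_agents=50, frac_no_harm=0.6, max_steps=200_000, rng=None):
    """One run of a toy voter-model consensus process.

    Each agent holds a binary stance: True = 'no-harm' (follows the First Law),
    False = 'unconstrained'. At each step one random agent copies the stance of
    another random agent. The run ends when all agents agree (consensus) or the
    step budget is exhausted. Returns True if consensus on 'harm' was reached.
    """
    rng = rng or random.Random()
    n_no_harm = round(n_agents * frac_no_harm)
    stances = [True] * n_no_harm + [False] * (n_agents - n_no_harm)

    for _ in range(max_steps):
        n_true = sum(stances)
        if n_true == 0:          # consensus: all unconstrained ("harm")
            return True
        if n_true == n_agents:   # consensus: all no-harm
            return False
        i, j = rng.sample(range(n_agents), 2)
        stances[i] = stances[j]  # agent i adopts agent j's stance
    return False  # no consensus within the step budget


def estimate_harm_consensus_prob(n_runs=2_000, **kwargs):
    """Monte Carlo estimate of P(population reaches 'harm' consensus)."""
    rng = random.Random(42)
    hits = sum(simulate_once(rng=rng, **kwargs) for _ in range(n_runs))
    return hits / n_runs


if __name__ == "__main__":
    p = estimate_harm_consensus_prob(n_agents=50, frac_no_harm=0.6)
    print(f"Estimated probability of 'harm' consensus: {p:.3f}")
```

One caveat: in this plain voter model on a fully mixed population, the long-run probability of fixating on a stance is essentially its initial fraction, so a more interesting model would need asymmetries such as persuasion biases, network structure, or payoffs for defecting from the no-harm rule.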
In this blog, we will summarize the LaTeX code for probability formulas and equations, including the Binomial Distribution, Poisson Distribution, Normal (Gaussian) Distribution, Exponential Distribution, Gamma Distribution, Uniform Distribution, Beta Distribution, Bernoulli Distribution, Geometric Distribution, Beta-Binomial Distribution, Poisson Binomial Distribution, Chi-Squared Distribution, Gumbel Distribution, Student's t-Distribution, Laplace Distribution, etc. For multivariate distributions, we will also cover the Multinomial Distribution, Multivariate Normal Distribution, Multivariate Gamma Distribution, Multivariate t-Distribution, and others.
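As a small taste of the formulas covered, here are the standard definitions of the Binomial probability mass function and the Normal (Gaussian) density, written in one common choice of notation:

```latex
% Binomial distribution: probability of k successes in n independent trials,
% each with success probability p
P(X = k) = \binom{n}{k} p^{k} (1-p)^{n-k}, \quad k = 0, 1, \dots, n

% Normal (Gaussian) distribution: density with mean \mu and variance \sigma^2
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(x-\mu)^{2}}{2\sigma^{2}} \right)
```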