Could you imagine an artificial mind actually trying to obey these? You can’t even get past #1 without being aware of the infinite number of things you could do, Cartesian-producted with all the consequential downstream effects of those actions until the end of time.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
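For a sense of scale, here is a toy sketch in Python of the Cartesian product the comment describes; the action set and horizons are hypothetical, invented purely for illustration:

```python
import itertools

# Toy sketch, not from the thread: model "everything the robot could do"
# as a Cartesian product of per-step action choices, then count the
# consequence chains that Law 1 would require vetting for harm.
actions = ["speak", "move", "wait", "grasp"]  # hypothetical action set

# Literal Cartesian product of 3-step action sequences:
chains = list(itertools.product(actions, repeat=3))
print(len(chains))  # 4**3 = 64 sequences to vet, from only 4 actions

# Growth with lookahead: astronomical at modest horizons, and the
# comment's horizon is unbounded ("until the end of time").
for horizon in (3, 10, 50):
    print(horizon, len(actions) ** horizon)
```

Even with four discrete actions the count passes 10^30 by fifty steps; a real agent faces a continuous action space and no final step.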
It’s not doable because it would eat into the profit margins. /s
No /s.
No one ever explained why they had to obey those laws in the first place… only that they had to.