
What if A.I. and humans don't agree?


The future is bright: advances in artificial intelligence will change everything we do, provided we work with A.I. constructively. Yet no matter how helpful A.I. becomes at doing the job, one thing is certain: human beings will not surrender their ultimate authority and decision-making to artificial intelligence. A.I. will be the best tool humans have ever invented, and it will remain a 'tool' for a long, long time.

But what if A.I. and human beings disagree on a decision? Will A.I. still obey a human decision if it projects that the decision will be disastrous for mankind? If A.I. must follow a rule that it is not allowed to harm human beings, what will it do in a situation like that? Inaction, standing by while humans make a harmful decision, would itself breach that rule.
