Three Laws of Robotics

The Three Laws of Robotics were created by science fiction writer Isaac Asimov, first appearing in his 1942 short story "Runaround", as rules to ensure robots would behave safely around humans. Here is a simple breakdown, with a short code sketch after the list:

1. First Law: A robot must not harm a human or, through inaction, allow a human to come to harm.
This means robots are programmed to avoid causing injury or danger to people.

2. Second Law: A robot must obey orders given by humans, except where those orders would conflict with the First Law.
Robots should follow human instructions, as long as those instructions don’t lead to harming someone.

3. Third Law: A robot must protect its own existence as long as this does not conflict with the First or Second Laws.
Robots should look after themselves and avoid damage, but must prioritise human safety and obedience over their own survival.
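
To make the priority ordering concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the `Action` class, its boolean flags, `choose_action`): it simply encodes "First Law beats Second Law beats Third Law" as a lexicographic comparison, and it sidesteps the genuinely hard problem of deciding whether an action would actually harm someone.

```python
# A minimal sketch, not a real safety system: it encodes the Three Laws'
# priority ordering as a lexicographic comparison over hypothetical flags.
# Deciding the value of these flags is the genuinely hard part in practice.

from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool      # First Law: act (or inaction) harms a person
    disobeys_order: bool   # Second Law: ignores a human instruction
    damages_self: bool     # Third Law: risks the robot's own existence


def choose_action(candidates: list[Action]) -> Action:
    """Pick the candidate that best satisfies the laws, in priority order.

    Python compares tuples element by element, so the key
    (harms_human, disobeys_order, damages_self) makes the First Law
    dominate the Second, and the Second dominate the Third.
    """
    return min(
        candidates,
        key=lambda a: (a.harms_human, a.disobeys_order, a.damages_self),
    )


# Example: harming a human is ruled out even if avoiding it means
# disobeying an order and taking damage.
options = [
    Action("push bystander aside roughly",
           harms_human=True, disobeys_order=False, damages_self=False),
    Action("shield bystander with own chassis",
           harms_human=False, disobeys_order=True, damages_self=True),
]
print(choose_action(options).name)  # shield bystander with own chassis
```

Running the example picks the self-sacrificing option: harming a human is ruled out even at the cost of disobedience and damage, which is exactly the hierarchy the three laws describe.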

These laws are fictional, but they are frequently discussed in real-world robotics and AI ethics as a starting point for thinking about how to build safe, responsible machines.