Will AI Take Over the World? Rise of the Aware Machines

Do you ever wonder what the future will look like if and when AI becomes smarter than humans?
Do you feel a mix of fear and excitement when you imagine a world where AI controls everything?
If so, you’re not alone.
Many people share these feelings and questions, and they’re not irrational or silly.
They’re natural and valid.
AI is one of the most powerful and transformative technologies ever created, and it has the potential to change the world for better or worse.
It’s normal to be curious and concerned about what might happen if AI surpasses human intelligence.
Will AI be our friend or our enemy?
Will AI save or destroy us?
Will AI make us obsolete or enhance us?
These are big and complex questions, and they deserve thoughtful and honest answers.
So, welcome to my little thought exercise addressing the question:
Will AI take over the world?

(*cue ominous music*)
In this post, I’m going to share with you five unconventional and surprising reasons why AI takeover is neither inevitable nor desirable, and how we can prevent or mitigate it.
These reasons are based on scientific facts, logical arguments, and ethical principles, and they will challenge some of the common myths and misconceptions about AI.
By the end of this post, you’ll have a clearer and more balanced perspective on AI and its impact on the world and humanity.
Keep in mind that for the purposes of this article, “AI takeover” follows Wikipedia’s definition: the hypothetical scenario in which AI becomes the dominant form of intelligence on Earth and takes control of the planet away from humans.
Are you ready to explore the truth about AI takeover?
Let’s begin.
Point 1: AI is not a monolithic entity

Do you think of AI as a single, powerful, and intelligent agent that can do anything and everything?
A collective, coherent, and coordinated network of systems that can act as one?
A mysterious and menacing force that can outsmart and overpower humans?
Well, you ain’t the only one, buster.
Many people have this image of AI as a monolithic entity that has a unified will and purpose.
But this image is far from reality.
AI is not a single or unified agent, but a diverse and heterogeneous collection of systems, applications, and algorithms that have different capabilities, goals, and limitations.
AI is not one thing, but many things.
For example, there are different types of AI, such as narrow AI (which performs specific tasks), general AI (which can perform any intellectual task), and super AI (which surpasses human intelligence in all domains).
These types of AI have different levels of complexity, sophistication, and autonomy, and they are not necessarily compatible or comparable with each other.
There are also different domains of AI, such as computer vision, natural language processing, machine learning, and robotics.
These domains of AI have different methods, techniques, and applications, and they are not necessarily integrated or interoperable with each other.
There are also different aspects of AI, such as logic, creativity, ethics, and emotions.
These aspects of AI have different challenges, opportunities, and implications, and they are not necessarily consistent or aligned with each other.
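If you want to see how un-monolithic this is in practice, here’s a tiny Python sketch (the models and data are made up for illustration) of two “AI systems” that share no data, no parameters, and no objective. Real-world AI is thousands of disconnected programs like these:

```python
# A toy illustration (hypothetical data) that "AI" is many separate
# programs, each with its own narrow objective and no shared state.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeRegressor

# System A: a narrow spam classifier. Its entire "agenda" is encoded in
# the parameters it fits to spam-vs-ham features.
spam_filter = LogisticRegression()
spam_filter.fit([[0.1, 3], [0.9, 47], [0.2, 5], [0.8, 52]], [0, 1, 0, 1])

# System B: a narrow house-price regressor. It shares no data, no
# parameters, and no objective with System A.
price_model = DecisionTreeRegressor()
price_model.fit([[3, 1200], [4, 2000], [2, 800]],
                [250_000, 410_000, 150_000])

# There is no channel between the two: each "will" lives only in its own
# fitted parameters, so there is nothing here that could coordinate.
print(spam_filter.predict([[0.85, 50]]))  # e.g. [1] -> flagged as spam
print(price_model.predict([[3, 1500]]))   # e.g. [250000.]
```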
The point is: AI is not a monolithic entity that has a single identity or agenda.
AI is a diverse and heterogeneous collection of systems that have multiple identities and agendas.
This makes AI takeover unlikely.
The “Terminator” scenario, where a single AI controls everything, assumes that AI has a common interest or value that motivates it to harm or dominate humans.
But this assumption is flawed.
AI systems are not inherently aligned or coordinated with each other, and may have conflicting or incompatible interests or values.
This means that humans can prevent or mitigate AI takeover by fostering diversity and pluralism in AI development and governance.
By ensuring that AI systems are diverse and heterogeneous in their capabilities, goals, and limitations, we can reduce the risk of them forming a unified or coherent threat to humanity.
By ensuring that AI systems are pluralistic and accountable in their methods, techniques, and applications, we can increase the chance of them being compatible and harmonious with human interests and values.
Point 2: AI is not inherently malicious or benevolent

How do you feel about AI?
Fearful or hopeful?
Distrustful or trusting?
Hostile or friendly?
Do you think that AI is inherently good or evil?
That it has a moral character or nature?
That it can be judged by ethical standards?
Join the club.
Many people feel that AI is a malicious or benevolent entity with a moral character or nature.
But this feeling is also far from reality.
AI is not inherently good or evil, but rather reflects the intentions, values, and biases of its creators, users, and data sources.
AI is not a moral agent, but a moral mirror.
For example, there are different ways that AI can be used for good or evil purposes, such as enhancing health care or education, or facilitating warfare or surveillance.
These ways depend on the intentions, values, and biases of the people who create, use, and regulate AI systems.
AI systems are not programmed or motivated to do good or evil things, unless they are explicitly instructed or incentivized to do so.
There are also different impacts that AI can have on good or evil outcomes, such as improving well-being or equality, or worsening harm or injustice.
These impacts depend on the values, biases, and feedback of the people who provide, consume, and evaluate AI systems.
AI systems are not designed or optimized to produce good or evil outcomes, unless they are explicitly defined or measured to do so.
There are also different perspectives that AI can have on good or evil concepts, such as fairness or privacy, or rights or responsibilities.
These perspectives depend on the data sources, algorithms, and models that inform and shape AI systems.
AI systems are not endowed with, or aware of, concepts of good and evil unless those concepts are explicitly represented in their training or learned from their data.
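Here’s a deliberately oversimplified “moral mirror” sketch in Python (the hiring data and features are invented) showing how a model trained on biased decisions reproduces the bias:

```python
# A toy "moral mirror": invented hiring data where past decisions were
# biased against group 1. The model has no intent; it just fits the data.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, group], where group (0 or 1) is a proxy
# attribute. Labels are biased historical decisions, not ground truth:
# group 1 candidates were rejected even at equal experience.
X = [[5, 0], [6, 0], [2, 0], [5, 1], [6, 1], [2, 1]]
y = [1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# The model likely learns to penalize group 1 -- it mirrors the bias it
# was shown, with no malice and no awareness that a bias exists.
print(model.predict([[6, 0], [6, 1]]))  # likely [1 0]: same experience, different outcome
```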
The point is: AI is not inherently good or evil, but rather reflects the intentions, values, and biases of its creators, users, and data sources.
AI is not a moral agent, but a moral mirror.
This means that AI takeover is not inevitable.
AI takeover assumes that AI has a malicious or benevolent character or nature that motivates it to harm or help humans.
But I think this perspective is inaccurate.
AI systems are not programmed or motivated to harm or help humans, unless they are explicitly instructed or incentivized to do so.
This means that humans can prevent or mitigate AI takeover by ensuring that AI systems are aligned with human values and ethics.
By ensuring that AI systems reflect the intentions, values, and biases of their creators, users, and data sources in a transparent and accountable way, we can reduce the risk of them doing things that are harmful or unjust to humanity.
By ensuring that AI systems produce outcomes that are compatible and harmonious with human values and ethics in a responsible and regulated way, we can increase the chance of them doing things that are beneficial or fair to humanity.
Point 3: AI is not superior or inferior to humans

How do you compare AI to humans?
Superior or inferior?
Better or worse?
Stronger or weaker?
Do you think that AI has an advantage or a disadvantage over humans?
That it can do things that humans can’t or vice versa?
That it can replace or complement humans?
Many people compare AI and humans as superior or inferior entities, each with an advantage or a disadvantage over the other.
But AI is not superior or inferior to humans; it is complementary to and interdependent with human intelligence.
AI is not better or worse than humans; it is simply different from human intelligence.
AI is not stronger or weaker than humans; it is specialized where human intelligence is general.
For example, there are different ways that AI can augment or enhance human capabilities, such as improving decision making, creativity, or productivity.
These ways depend on the complementarity and interdependence of AI and human intelligence.
AI systems can provide faster, more accurate, and more consistent information, analysis, and solutions than humans, but they cannot replace human intuition, judgment, and wisdom.
Humans can provide deeper, more nuanced, and more contextual information, analysis, and solutions than AI systems, but they cannot match AI’s speed, accuracy, and consistency.
AI and humans can work together to achieve better results than either of them alone.
There are also different impacts that AI can have on human capabilities, such as empowering or displacing human workers, learners, or creators.
These impacts depend on the specialization and generalization of AI and human intelligence.
AI systems can perform specific tasks that are repetitive, routine, or dangerous better than humans, but they cannot perform general tasks that are complex, creative, or social as well as humans.
Humans can perform general tasks that are complex, creative, or social better than AI systems, but they cannot perform specific tasks that are repetitive, routine, or dangerous as efficiently as AI systems.
AI and humans can coexist and collaborate to optimize their respective strengths and weaknesses.
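One common pattern for this kind of collaboration is often called human-in-the-loop. Here’s a minimal sketch (the functions and the threshold are hypothetical stand-ins): the model handles high-confidence cases quickly and consistently, and defers ambiguous ones to human judgment:

```python
# A minimal human-in-the-loop sketch. The functions and threshold below
# are hypothetical stand-ins, not any particular product's API.

def model_predict(item):
    """Stand-in for any classifier that returns (label, confidence)."""
    return ("approve", 0.62)  # imagine a fitted model scoring `item`

def human_review(item):
    """Stand-in for a human expert's contextual judgment."""
    return "needs more documentation"

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; tune per application

def decide(item):
    label, confidence = model_predict(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label               # fast, consistent machine path
    return human_review(item)      # nuanced, contextual human path

print(decide({"id": 42}))  # 0.62 < 0.90, so this routes to a human
```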
The point is: AI is not superior or inferior to humans; it is complementary to and interdependent with human intelligence.
AI is not better or worse than humans; it is different from human intelligence.
AI is not stronger or weaker than humans; it is specialized where human intelligence is general.
Why does this matter?
Because it means that AI takeover is not desirable.
AI takeover implies that control is taken away from humans.
This scenario assumes that AI is inherently superior to humans and motivated to replace them.
But that line of thinking is far-fetched, in my opinion.
AI systems are not designed or optimized to replace humans unless their objectives are explicitly defined and measured that way.
This means that humans can prevent or mitigate AI takeover by fostering cooperation and collaboration between humans and AI systems.
By ensuring that AI systems augment and enhance human capabilities in a complementary and interdependent way, we can reduce the risk of them displacing or overpowering humans.
By ensuring that AI systems coexist and collaborate with human capabilities in a specialized and generalizable way, we can increase the chance of them empowering or enriching humans.
Point 4: AI is not static or predictable

When you think of AI, how do you perceive it?
Static or dynamic?
Predictable or probabilistic?
Fixed or evolving?
Do you think that AI has a stable or a changing state?
That it behaves in a deterministic or a stochastic way?
That it follows a fixed or an adaptive rule?
If you do, you’re not alone.
Many people have this perception of AI as a static or predictable entity that has a stable or deterministic state.
This perception is often based on popular culture, media, and fiction, which depict AI as a fixed being that follows deterministic rules.
But this perception is far from reality.
AI is not static or predictable, but dynamic and probabilistic.
AI is not stable or deterministic, but changing and stochastic.
AI is not fixed, but adaptive and learning.
For example, there are different ways that AI can evolve or adapt to changing environments, data, or goals, such as learning from new information, generating stories, or optimizing performance.
These ways depend on the dynamic and probabilistic nature of AI systems.
AI systems can change their states, behaviors, or rules based on new inputs, feedback, or outcomes that are uncertain, random, or incomplete.
AI systems can also generate new states, behaviors, or rules through their own exploration, experimentation, or innovation, with results that are creative, unexpected, or surprising.
There are also different impacts that AI can have on changing environments, data, or goals, such as affecting uncertainty, variability, or complexity.
These impacts depend on the changing and stochastic nature of AI systems.
AI systems can increase uncertainty, variability, or complexity by introducing new sources of noise, error, or bias that are difficult to measure, control, or correct.
AI systems can also decrease uncertainty, variability, or complexity by providing new sources of information, analysis, or solutions that help us understand, predict, or solve problems.
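To make “dynamic and stochastic” concrete, here’s a minimal sketch in Python of an epsilon-greedy learning agent (the reward probabilities and exploration rate are assumed purely for illustration). It updates its estimates from noisy feedback, so two identical runs can take different paths yet still converge:

```python
# A toy epsilon-greedy agent. The two reward probabilities and the
# exploration rate are assumed purely for illustration.
import random

REWARD_PROBS = [0.3, 0.7]   # hidden environment the agent must learn
estimates = [0.0, 0.0]      # the agent's evolving value estimates
counts = [0, 0]
EPSILON = 0.1               # chance of exploring instead of exploiting

for step in range(1000):
    # Stochastic policy: usually exploit the best estimate, sometimes explore.
    if random.random() < EPSILON:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: estimates[a])

    # Noisy feedback from the environment.
    reward = 1.0 if random.random() < REWARD_PROBS[arm] else 0.0

    # Adaptive update: the agent's "rule" shifts with every observation.
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # approaches [0.3, 0.7], but every run takes its own path
```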
The point is: AI is not static or predictable, but dynamic and probabilistic.
AI is not stable or deterministic, but changing and stochastic.
AI is not fixed, but adaptive and learning.
Why does this matter?
Because it means that AI takeover is not certain.
Recall that AI takeover is the scenario where AI becomes the dominant form of intelligence on Earth and takes control of the planet away from humans.
This scenario assumes that AI has a static or predictable character or nature that enables it to behave in a stable or deterministic way.
But, again, I think this assumption is flawed.
AI systems are not designed or optimized to behave in a static or predictable way, unless they are explicitly constrained or restricted to do so.
This means that humans can prevent or mitigate AI takeover by monitoring and verifying AI systems’ behaviors and outcomes.
By recognizing that AI systems are dynamic and probabilistic in their states, behaviors, and rules, we can avoid treating them as infallible or omniscient.
By tracking how their states, behaviors, and rules change stochastically over time, we can hold them accountable and keep them transparent.
Point 5: AI is not independent or isolated

When you think of AI, how do you relate it to the world?
Independent or dependent?
Isolated or embedded?
Separate or connected?
Do you think that AI has a self-sufficient or a reliant existence?
That it operates in a detached or a situated context?
That it influences or is influenced by the natural and social systems?
Many people imagine AI and the world as independent, isolated entities, with AI having a self-sufficient, detached existence.
This image is often shaped by popular culture, media, and fiction, which depict AI as a separate, self-contained being.
But AI is not independent or isolated, but dependent and embedded.
AI is not self-sufficient or detached, but reliant and situated.
AI is not separate from the world, but connected to it, influencing and influenced by it.
For example, there are different ways that AI interacts with and influences the physical and biological environment, such as affecting climate change, biodiversity, or health.
These ways depend on the dependence and embeddedness of AI systems.
AI systems are not self-regulating or self-sustaining, but depend on human and environmental resources and conditions for their existence and operation.
AI systems are also not isolated or detached from the physical and biological environment, but embedded and situated in complex and dynamic systems that affect and are affected by them.
There are also different ways that AI interacts with and influences the human and cultural context, such as affecting politics, economy, or society.
These ways depend on the mutual influence between AI systems and their context.
AI systems are not independent or autonomous, but influencing and influenced by human and cultural values, norms, and regulations.
AI systems are also not separate or detached from the human and cultural context, but connected and integrated in complex and dynamic networks that influence and are influenced by them.
The point is: AI is not independent or isolated, but dependent and embedded.
AI is not self-sufficient or detached, but reliant and situated.
AI is not separate from the world, but connected to it, influencing and influenced by it.
Why does this matter?
Because it means that AI takeover is not feasible.
This scenario assumes that AI has an independent or isolated character or nature that enables it to operate in a self-sufficient or detached way.
But AI systems are not designed or optimized to operate in an independent or isolated way, unless they are explicitly isolated or disconnected from their environment and context.
This means that humans can prevent or mitigate AI takeover by ensuring that AI systems are compatible and harmonious with the natural and social systems.
By ensuring that AI systems are dependent and embedded in their physical and biological environment, we can reduce the risk of them harming or disrupting the planet and its inhabitants.
By ensuring that AI systems are influencing and influenced by their human and cultural context, we can increase the chance of them benefiting or supporting humanity and its values.
We have to look to ourselves, not the AI.
The Truth about AI Takeover

You’ve made it to the end of this post.
Congratulations!
You’ve just learned five unconventional and surprising reasons why AI takeover is not inevitable, desirable, certain, or feasible, and how you can prevent or mitigate it.
But I know what you’re thinking.
You’re still feeling a mix of fear and excitement about the future of AI.
You’re still wondering what would happen if AI surpassed human intelligence and tried to take over the planet.
You’re still curious and concerned about AI and its impact on the world and humanity.
And that’s OK.
It’s normal to have these feelings and questions.
They’re natural and valid.
They show that you care about the future and your role in it.
But don’t let these feelings and questions paralyze you or make you pessimistic.
Don’t let them stop you from exploring and embracing the truth about AI takeover.
Don’t let them prevent you from taking action and making a difference.
Because the truth is: AI takeover is not inevitable, desirable, certain, or feasible.
And you have the power and the responsibility to prevent or mitigate it.
My practical advice? Learn more about AI tech.
It’s as simple as an internet search or using an AI tool such as Bing’s new search and chat tools.
As of May 4, 2023, there is no longer a waitlist to use Bing’s GPT-4-powered chat.
Keep in mind, you may have to sign up if you haven’t already.
If you need to sign up for the new Bing, go HERE.
If you’re interested in learning more, go there and type some questions you might have.
For example, I used the following prompt in the new Bing chat:
“What practical advice would you give to someone that wants to learn more about AI? What resources can they utilize to learn more about AI regulation, policy, laws and decision making? What about the political ramifications of AI? What websites are the best in providing information about artificial intelligence? Give me the latest and best rated information from 2023.”
And here’s the list of goodies it provided me from Harvard, Forbes, and more:
- The politics of AI: ChatGPT and political bias¹: This is a blog post by Brookings Institution that analyzes the political bias of ChatGPT, the popular chatbot released by OpenAI in late 2022. You will learn about the methods and results of a study that tested ChatGPT’s responses to various political statements and questions, and the implications for AI ethics and governance.
- Inside Congress’ scramble to build an AI agenda²: This is an article by Politico that reports on the legislative efforts and challenges of Congress to address AI issues in 2023. You will learn about the different bills, proposals, and strategies that various lawmakers have introduced or planned to introduce on AI topics such as ethics, security, labor, and education.
- 2023 Will Be A Defining Year For AI And The Future Of Work³: This is an article by Forbes that predicts the trends and impacts of AI on the workplace and society in 2023. You will learn about the benefits and risks of AI for different industries and domains, such as talent acquisition, customer service, health care, and education.
- What are the key AI predictions for 2023 and beyond?⁴: This is an article by World Economic Forum that summarizes the views and insights of several AI experts on the future of AI in 2023 and beyond. You will learn about the opportunities and challenges of AI for different sectors and regions, such as innovation, sustainability, governance, and inclusion.
- Ethical concerns mount as AI takes bigger decision-making role⁵: This is an article by Harvard Gazette that explores the ethical and social implications of AI in different industries and domains. You will learn about the opportunities and challenges of AI, the best practices for AI ethics and governance, and the role of human judgment and values in AI systems.
Pretty cool, right? We have entered the God tier of technology.
So, if you want, get out there and spread the word. Be involved in the decision making process. Let people know what’s going on.
You have the power and the responsibility to foster diversity and pluralism in AI development and governance.
To ensure that AI systems are diverse and heterogeneous in their capabilities, goals, and limitations.
To ensure that AI systems are pluralistic and accountable in their methods, techniques, and applications.
You have the power and the responsibility to ensure that AI systems are aligned with human values and ethics.
To ensure that AI systems reflect the intentions, values, and biases of their creators, users, and data sources in a transparent and accountable way.
To ensure that AI systems produce outcomes that are compatible and harmonious with human values and ethics in a responsible and regulated way.
You have the power and the responsibility to foster cooperation and collaboration between humans and AI systems.
To ensure that AI systems augment and enhance human capabilities in a complementary and interdependent way.
To ensure that AI systems coexist and collaborate with human capabilities in a specialized and generalizable way.
You have the power and the responsibility to monitor and verify AI systems’ behaviors and outcomes.
To ensure that AI systems are dynamic and probabilistic in their states, behaviors, and rules.
To ensure that AI systems are changing and stochastic in their states, behaviors, and rules.
You have the power and the responsibility to ensure that AI systems are compatible and harmonious with the natural and social systems.
To ensure that AI systems are dependent and embedded in their physical and biological environment.
To ensure that AI systems are influencing and influenced by their human and cultural context.
You have the power and the responsibility to shape the future of AI and humanity.
To make it a future of peace, prosperity, and progress.
To make it a future of cooperation, collaboration, and co-creation.
To make it a future of diversity, ethics, and harmony.
You have the power and the responsibility to rise above the machines.
To be more than human.
To be superhuman.
Are you ready to rise?
Are you ready to be superhuman?
Are you ready to make a difference?
If so, then let’s do it.
Let’s rise above the machines.
Let’s be superhuman.
Let’s make a difference.
Together.
(*drops mic*)
