The real perils of artificial intelligence


Why is The Terminator scenario unrealistic? Modern AI focuses on automated reasoning, built from perfectly understandable principles combined with plenty of input data, both of which are provided by humans or by systems that humans deploy. To think that common algorithms such as the nearest neighbour classifier or linear regression could somehow spawn consciousness and evolve into superintelligent AI minds is, in our opinion, far-fetched.
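To make the "perfectly understandable principles" point concrete, here is a minimal sketch of a nearest neighbour classifier (the data points and labels are made up for illustration). Every step is a plain, human-readable rule; there is nowhere for a hidden mind to emerge.

```python
# A minimal 1-nearest-neighbour classifier: predict the label of the
# training point that lies closest to the query point.
def nearest_neighbour(train_points, train_labels, query):
    """Return the label of the training point closest to `query`."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Find the index of the closest training point.
    best = min(range(len(train_points)),
               key=lambda i: squared_distance(train_points[i], query))
    return train_labels[best]

# Toy data: two "blue" points near the origin, one "red" point far away.
points = [(0, 0), (1, 1), (5, 5)]
labels = ["blue", "blue", "red"]
print(nearest_neighbour(points, labels, (4, 4)))  # (5, 5) is closest: red
```

The whole "intelligence" of the method is a distance computation and a minimum, which is exactly why such algorithms are transparent rather than mysterious.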

The idea of an exponential increase in intelligence is also unrealistic, for the simple reason that even if a system could optimise its own workings, it would keep facing ever harder problems that would slow its progress. This is similar to how the progress of human science requires ever greater effort and resources from the whole research community, and indeed from society as a whole, which a lone superintelligent entity would not have access to.

Read more about doomsday scenarios in the AI field in this article written by Strongbytes.

Artificial intelligence terminology everyone in business should know

Although history has not always been kind to it, artificial intelligence is definitely here to stay. Researchers continue to conduct groundbreaking work in the field on a daily basis, and practitioners keep finding new ways to integrate AI into other areas of our lives.

Data from PricewaterhouseCoopers indicates that business leaders expect artificial intelligence to be fundamental in the future, with 72% of them calling it a business advantage. It is clear that many more will want to jump on the AI bandwagon and help revolutionize the world, so understanding the main terminology and concepts of AI is essential for everyone in the business world.

Artificial Intelligence

The idea of an ‘intelligent’ device has been embedded in the human mind since antiquity. Yet the term artificial intelligence was coined, and the field founded as an academic discipline, much later, in 1956.


Algorithms

Algorithms are the cornerstone of artificial intelligence. An algorithm is a set of rules (mathematical formulas and/or programming commands) that tells a computer, step by step, how to solve a problem.
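As a classic illustration of an algorithm as a short, explicit list of rules, here is Euclid's algorithm for the greatest common divisor (the example numbers are arbitrary):

```python
# Euclid's algorithm: the greatest common divisor of two integers,
# expressed as three explicit rules.
def gcd(a, b):
    while b != 0:        # Rule 1: repeat while the second number is non-zero
        a, b = b, a % b  # Rule 2: replace (a, b) with (b, a mod b)
    return a             # Rule 3: when b reaches 0, a is the answer

print(gcd(48, 18))  # 6
```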

Machine Learning

Machine learning is the part of artificial intelligence that allows computers to improve over time through experience and practice. According to computer scientist Arthur Samuel, who coined the term in 1959, machine learning gives computers the ability to learn without being explicitly programmed.
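To show what "learning from experience" can mean in its simplest form, here is a sketch of ordinary least-squares line fitting (the observations are made up): the "learning" is nothing more than adjusting two numbers, a slope and an intercept, so that a line fits the observed data.

```python
# Fit a line y = slope * x + intercept to observed (x, y) pairs
# using ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Experience": noise-free observations of y = 2x + 1
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

Nothing here was explicitly programmed with the answer: the parameters come entirely from the data, which is the essence of Samuel's definition.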

Read the entire article here.

The beginning of AI and its first breakthroughs

Over the past few years, artificial intelligence has found its way into numerous areas of our lives. Finance, healthcare, retail, education and manufacturing, to name but a few, are industries that have integrated artificial intelligence capabilities into their businesses.

But to fully appreciate the implications, one must also understand the premises and beginnings of AI.

The Imitation Game

British mathematician and code-breaker Alan Turing is often considered the father of computer science and artificial intelligence. In 1936, he developed the Turing machine, an abstract model that uses a predefined set of rules to determine a result from a set of input variables. Later, in his 1950 paper ‘Computing Machinery and Intelligence’, he proposed the imitation game, now known as the Turing test, in which a machine attempts to convince a human interrogator that it, too, is human.

The Dartmouth Workshop

The official birth of AI took place in the summer of 1956, when John McCarthy held the first workshop on artificial intelligence at Dartmouth College. The workshop's full name was the Dartmouth Summer Research Project on Artificial Intelligence, and its purpose was to discuss computers, natural language processing, neural networks, the theory of computation, abstraction and creativity.

Read the entire article here.