Artificial intelligence: a technology designed to simulate human thinking and decision-making. It's neither magic nor science fiction, and that's exactly what makes it so exciting. In this guide, you'll learn what artificial intelligence really is and why it's getting so much attention right now. Curious? Then let's get started!
Key facts at a glance
- What is artificial intelligence (AI)? AI is a branch of computer science that enables machines to simulate human abilities such as learning, problem solving, and decision making.
- What types of AI are there? A distinction is made between weak AI (also known as narrow AI), which solves specific tasks (today's standard), and strong AI (AGI), which theoretically possesses human intelligence.
- How does AI work? Modern AI is mostly based on machine learning and neural networks. It independently learns patterns from large amounts of data instead of following rigidly programmed rules.
- What is the difference between AI and normal software? Classic software works deterministically (input A = always output B). AI works probabilistically (with probabilities) and can adapt to new, unknown data.
- Where is AI used? AI is revolutionizing numerous industries, including medicine (diagnostics), mobility (autonomous driving), industry (predictive maintenance), and e-commerce.
What is AI?
Fundamentally, AI can be defined as follows: Artificial intelligence (AI) is a technology that aims to simulate human-like thinking and decision-making processes.
But what does that mean in detail?
AI systems are programmed to perform tasks that typically require human intelligence. These include skills such as:
- Learning: AI can recognise patterns in data and use this knowledge to improve future decisions.
- Thinking and planning: AI analyses complex situations, makes predictions and develops strategies to achieve a specific goal.
- Problem solving: AI uses insights to overcome challenges. These range from simple optimisation problems to highly complex analyses.
- Perception and interaction: AI can understand language, analyse images and respond to environmental stimuli – comparable to human senses.
Recommended reading: Thanks to its human-like thought and decision-making processes, artificial intelligence can offer many advantages – but it also brings challenges. If you would like to learn more about the opportunities and risks of AI, take a look at our guide to the advantages of artificial intelligence!
Different types of artificial intelligence: weak and strong AI
Intelligent computer programs that simulate human-like thinking—many people will feel a cold shiver run down their spine at this idea. But first, let's give the all-clear: the scenario just described is strong AI, and so far, it doesn't exist outside of sci-fi movies. The AI we work with today is what is known as weak AI. But what exactly is the difference between these two types of artificial intelligence?
Weak AI: The Specialist
Weak AI, also known as narrow AI, is designed to perform a clearly defined task – and to do it really well.
Some characteristics of weak AI:
- It specializes in one task. So a chatbot will never suddenly be able to drive a car.
- Its capabilities are limited to the patterns in the data it was trained on by humans.
- It doesn't truly understand what it's doing; it applies statistical patterns learned during training.
Examples include ChatGPT, Alexa, facial recognition, and self-driving cars.
Strong AI: The universal thinker
Strong AI, or Artificial General Intelligence (AGI), is still only a theoretical concept. Strong AI would have the ability not only to solve specific tasks, but also to think, learn and act like a human being – regardless of its original programming.
Strong AI could:
- independently solve complex problems in a wide variety of fields,
- learn new tasks and even
- develop creative solutions.
It would be capable of mastering everything from medical diagnosis to poetry. For now, however, strong AI remains a topic for films and novels – and we humans remain in control.
What distinguishes AI from a conventional computer programme?
The main difference lies in the processing logic and the predictability of the results.
| Feature | Conventional computer programme | Artificial intelligence (AI) |
|---|---|---|
| Functionality | Rule-based: Follows exactly the "if-then" rules (algorithms) written by the programmer. | Data-driven: Independently recognizes patterns and correlations in data (models). |
| Result (output) | Deterministic: The same input always produces the exact same result. | Probabilistic: Works with probabilities; the result may vary or deviate "creatively." |
| Ability to learn | Static: Cannot learn anything new unless the source code is changed manually. | Adaptive: Can learn and improve through training with new data. |
| Data requirements | Works even with little or no data, as long as the logic is correct. | Often requires large amounts of data (big data) for training in order to be accurate. |
| Error type | Bugs: Errors arise due to incorrect programming or logical gaps (program crashes). | Hallucinations/bias: Errors arise from poor training data or incorrect pattern recognition (AI responds "incorrectly" but convincingly). |
AI is characterised by a certain unpredictability. But that is precisely what makes it “intelligent” – the ability to respond to the same input in different ways, much like a human being would.
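To make the contrast concrete, here is a minimal sketch in Python. The rule-based function is fully deterministic, while the AI-style function works with a probability. The `spam_probability` value is a hypothetical placeholder that, in practice, would come from a trained model.

```python
import random

# Conventional program: deterministic if-then rules.
# The same input always produces the exact same output.
def classify_rule_based(message: str) -> str:
    if "prize" in message.lower() or "lottery" in message.lower():
        return "spam"
    return "not spam"

# AI-style classifier: probabilistic. A trained model would estimate
# spam_probability from the message; here it is passed in as a placeholder.
def classify_probabilistic(message: str, spam_probability: float) -> str:
    return "spam" if random.random() < spam_probability else "not spam"

print(classify_rule_based("You won a prize!"))          # always "spam"
print(classify_probabilistic("You won a prize!", 0.9))  # usually "spam", occasionally not
```

The rule-based function can only ever fail through a logic gap in its rules; the probabilistic one can be wrong in new ways, which corresponds to the "hallucinations/bias" row in the table above.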
AI technologies: How does artificial intelligence work?
From a technical perspective, modern artificial intelligence is primarily based on the subfield of machine learning. Unlike traditional software, which follows rigid rules, AI algorithms "learn" from large amounts of data in order to independently recognize patterns and derive solutions.
The functional process can be simplified into three core phases (sketched in the code example after this list):
- Input & Training: The system is fed with huge data sets (big data) – be it text, images, or sensor values.
- Processing (deep learning): Using artificial neural networks, loosely inspired by the structure of the human brain, the AI analyzes this data in complex computational processes.
- Output & optimization: The system delivers a result and compares it with the target value. The model continuously refines itself through error corrections.
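Here is a minimal sketch in Python that compresses these three phases into a few lines. It trains a miniature model, consisting of just one weight and one bias (real deep-learning models chain millions of such parameters in layered networks), to recover a simple relationship from noisy example data. The data, the learning rate, and the number of training steps are illustrative assumptions.

```python
import numpy as np

# 1) Input & training: example data the system should learn from.
#    Here the (hypothetical) underlying relationship is y = 2x + 1, plus noise.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(0, 0.5, size=100)

# 2) Processing: a miniature "model" with two learnable parameters.
w, b = 0.0, 0.0
learning_rate = 0.01

# 3) Output & optimization: predict, measure the error against the target
#    value, nudge the parameters to reduce the error, and repeat.
for _ in range(2000):
    y_pred = w * x + b                           # the model's current output
    error = y_pred - y                           # deviation from the target
    w -= learning_rate * (2 * error * x).mean()  # gradient step for the weight
    b -= learning_rate * (2 * error).mean()      # gradient step for the bias

print(f"learned relationship: y = {w:.2f}x + {b:.2f}")  # close to 2x + 1
```

Each pass through the loop is one round of "predict, compare with the target value, correct": exactly the feedback cycle described above.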
Of course, this is a highly simplified explanation of how artificial intelligence works. If you would like to learn more, you will find a detailed explanation in our guide, ‘How does artificial intelligence work?’
The history of artificial intelligence
1950 - Alan Turing asks the question: "Can machines think?"
In his essay "Computing Machinery and Intelligence", Turing proposes what later becomes known as the Turing Test: a way of judging whether a machine can exhibit behaviour indistinguishable from a human's.
1956 - The birth of artificial intelligence
At the Dartmouth Conference, the very first AI conference, John McCarthy, Marvin Minsky and others coined the term "Artificial Intelligence". The conference is regarded as the official starting point for AI research.
1966 - ELIZA: The first chatbot
Joseph Weizenbaum develops ELIZA, a computer programme that simulates human conversations. It is often regarded as the first chatbot.
1970s - The first "AI winter"
After initial hype, many projects fail due to a lack of computing power and unrealistic expectations. Funding is severely curtailed.
1980s - Upswing through expert systems
Expert systems, programs that encode the knowledge of human specialists as if-then rules, bring AI into commercial use and trigger a new wave of funding.
1997 - Deep Blue beats the world chess champion
The IBM supercomputer "Deep Blue" defeats the world chess champion Garry Kasparov - a historic moment for AI.
2011 - Watson wins "Jeopardy!"
IBM's AI "Watson" beats human opponents in the quiz show "Jeopardy!". A milestone in natural language processing.
2012 - Breakthrough through deep learning
A team led by Geoffrey Hinton wins the "ImageNet" competition with a neural network. This marks the beginning of the modern era of deep learning.
2016 - AlphaGo defeats Go world champion
Go was long considered too complex for computers to master - until DeepMind's "AlphaGo" defeats world champion Lee Sedol. The victory is regarded as a breakthrough for reinforcement learning.
2018 - BERT revolutionises language processing
Google publishes BERT (Bidirectional Encoder Representations from Transformers), a model that significantly improves NLP (Natural Language Processing) and forms the basis of many of today's AI applications.
2022 - ChatGPT inspires the world
OpenAI launches ChatGPT, an AI tool that can generate human-like texts. It triggers a global discussion about the role of AI.
2023 - Generative AI in practice
Generative AI models such as DALL-E, Midjourney and ChatGPT are widely used in art, design, writing and programming.
The future of artificial intelligence
The future of AI will be less about hype and more about realism. According to the Gartner Hype Cycle, generative AI is heading into the "trough of disillusionment": exaggerated expectations are slowly giving way to realistic assessments. The focus will be on developing, evaluating, and prioritizing real use cases.
These use cases will then determine what infrastructure is really necessary to use AI efficiently and sustainably. It is therefore less about blindly following every trend and more about using AI where it creates real added value – whether in automation, data analysis or process optimisation.
Areas of application for artificial intelligence
- Medicine: Early detection of diseases, personalised therapies and more efficient diagnoses
- Mobility: Autonomous driving, traffic management and route optimisation
- Finance: Fraud detection, risk assessment and automated investment strategies
- Industry: Predictive maintenance, process automation and quality assurance
- E-commerce: Personalised product recommendations, chatbots and dynamic pricing
- Creative industries: Generating text, images and music or supporting design processes
The list is long, and developments are progressing rapidly. Whether in research, in business or in everyday life, AI is constantly opening up new possibilities.
Artificial intelligence has countless applications – the examples mentioned here are just a small selection. If you would like to learn more about how AI can be used specifically, read our guide: How can artificial intelligence be used?
Using AI technologies – shaping progress
Technology is advancing rapidly and constantly creating new possibilities. Whether in industry, production, or completely different fields, the opportunities offered by AI are virtually limitless.
Are you wondering how you can bring AI into your business to benefit from these opportunities? No problem – that's exactly what we're here for! At MaibornWolff, we help you find the right use cases, develop tailor-made solutions and successfully integrate AI into your business. Let's shape the future together!
Bring artificial intelligence into your company now.
And benefit from endless possibilities.
Frequently asked questions about artificial intelligence
What are the legal regulations for artificial intelligence in the EU?
The EU AI Act regulates AI applications according to their level of risk:
- applications with unacceptable risk are prohibited (e.g. social scoring),
- high-risk applications (e.g. CV scanning in recruitment) are subject to strict requirements, and
- lower-risk systems face only transparency obligations or remain largely unregulated.
The aim is transparency, safety and ethical use of AI. The law is intended to become a global standard for AI regulation.
What ethical challenges does AI pose?
AI can reinforce biases, discriminate or make unfair decisions if it is based on incorrect or incomplete data. There is also a risk of misuse, for example for surveillance purposes. Responsible use and ethical guidelines are crucial to minimise such risks.
How does machine learning differ from deep learning?
Machine learning is a branch of AI based on algorithms that learn from data. Deep learning is a specialised method of machine learning that uses artificial neural networks to solve more complex tasks such as image or speech recognition. Deep learning usually requires more data and computing power.
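As a rough illustration (a sketch using scikit-learn, not a benchmark), the same classification task can be tackled with a classic machine-learning model and with a small neural network. The dataset and the layer sizes are arbitrary assumptions chosen for the example.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# A small synthetic dataset: 1,000 samples, 20 numeric features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Classic machine learning: a linear model, one learned weight per feature.
ml_model = LogisticRegression(max_iter=1000).fit(X, y)

# Deep learning in miniature: a neural network with two hidden layers,
# able to capture non-linear relationships, at the cost of more
# parameters, more data and more computation.
dl_model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                         random_state=0).fit(X, y)

print("logistic regression accuracy:", ml_model.score(X, y))
print("small neural network accuracy:", dl_model.score(X, y))
```

On such a small dataset, both models perform similarly; the practical differences (data hunger, computing cost, the ability to model complex inputs such as images or speech) only show up at larger scale.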
Kyrill Schmid is Lead AI Engineer in the Data and AI division at MaibornWolff. The machine learning expert, who holds a doctorate, specialises in identifying, developing and harnessing the potential of artificial intelligence at the enterprise level. He guides and supports organisations in developing innovative AI solutions such as agent applications and RAG systems.