Artificial intelligence has ceased to be a topic of the future. It has entered offices, homes, and children’s rooms so quickly that most organizations and families have not had time to assess what it actually brings and what consequences it will have. This is not a reproach; it is a description of a situation that needs to be addressed. The question today is not whether to use AI, but how to use it without relinquishing control over what matters to us.
Here we focus on three areas: where companies most often fail when implementing AI, the challenges the digital environment poses to families with children, and a practical way to distinguish meaningful technology deployment from a mere reaction to market pressure.
We discussed these topics with Libor Bešenyi (Co-CEO and CTO of Xolution, who started as a software architect and gradually tied his career to artificial intelligence research and its practical application), Vladimír Šucha (a leading Slovak scientist and professor with extensive management experience within European Commission structures and in international academia), and Juraj Podružek (who works at the Kempelen Institute, where he founded and leads a research team on AI ethics and regulation).
Where do companies fail, and why is it not just a technical problem?
When deploying AI in companies, the most common mistake is not in technology, but in organization and values. Companies buy a tool, implement it into processes, and only later start thinking about what they actually wanted to achieve and who is responsible for the results. Juraj Podružek from the Kempelen Institute of Intelligent Technologies (KInIT) states:
“Companies very often implement AI without first asking the fundamental question: what do we actually want this technology to do and what do we not want it to do? They lack an ethical framework not only in terms of values, but also procedurally – who decides, who controls, who bears responsibility if the system makes a mistake.”
This deficit is not accidental. Organizations are under pressure. Competitors are deploying AI, management is pushing for digitalization, and decisions are made quickly. In such an atmosphere, companies easily push ethical considerations to the bottom of their priority list or delegate them to the IT department, which lacks the competence or authority to handle them.

Responsibility cannot be delegated to an algorithm
Vladimír Šucha, who has been involved with AI for a long time, points out that the problem also has a deeper dimension. It relates to how organizations understand the very essence of AI decisions:
“AI systems are not neutral. They contain the values of their creators, the biases of training data, assumptions about what is normal and desirable. If a company doesn’t know what values are embedded in the system it is deploying, it cannot know what it is actually introducing into its culture and processes.”
A special area where responsibility must not be entrusted to machines is personnel decisions: recruitment, employee evaluation, layoffs, and career progression. These are areas with a direct impact on people’s lives. AI can help here (sorting resumes, analyzing performance patterns), but it must not replace human judgment. Podružek formulates this as a fundamental principle:
“Decisions affecting the dignity, rights, and lives of people must remain in human hands. AI can be a tool, it can provide recommendations, but ultimate responsibility must be clearly assigned to a specific person. Otherwise, the organization divests itself not only of responsibility but also of the ability to explain why it made the decision it did.”

Libor Bešenyi, who has dozens of real-world AI implementations in companies behind him, confirms that the biggest source of problems in practice is not bad technology, but a bad brief:
“90% of the failures I’ve seen have the same root: the company didn’t know what it wanted to measure, how it would know it was working, and who would be responsible for the outcome. If you don’t know that before launch, it won’t solve itself after launch.”
Where must the decision remain in human hands?
– Personnel decisions, as already mentioned: recruitment, evaluation, layoffs.
– Medical diagnoses and treatment plans with a direct impact on the patient.
– Legal assessments and judicial decision-making.
– Educational and disciplinary decisions concerning children and adolescents.
– Ethical dilemmas and decisions in situations of conflicting values.
– Crisis management involving high uncertainty and moral burden.
Podružek adds another dimension that companies underestimate: transparency towards employees and customers. People have the right to know when an AI system is communicating with them or evaluating them. Concealing this is not only unethical; it is above all a short-sighted strategy that ultimately destroys trust.
“Transparency is not just an ethical requirement. It is a fundamental condition for people to be able to understand and control AI systems. If you don’t know that an algorithm is evaluating you, you can’t ask what criteria it uses, nor can you challenge a decision if it’s unfair,” adds Podružek.
Children, chatbots, and the parental dilemma
While companies grapple with ethical frameworks and processes, a different, often more urgent challenge is unfolding at home. Children and adolescents today are the first generation to grow up in natural contact with conversational AI. It’s not just games and entertainment. Chatbots are becoming helpers with homework, sources of information, and in some cases, a substitute for social interaction.
Vladimír Šucha, who has long focused on mental resilience in the context of digital transformations, views this situation with concern, but without moralizing:
“The problem is not AI itself in the hands of children. It’s about whether they use it to grow, or as a shortcut that causes their own thinking and social skills to atrophy. It’s the difference between a bicycle that helps you go further, and a crutch without which you eventually can’t even move.”
The risk is not abstract. When a child lets a chatbot write an essay instead of thinking through how to write it themselves, they miss more than a language exercise. They miss out on training in critical thinking, formulating arguments, and working with uncertainty: skills that cannot be made up later by reading a manual.
Šucha also points to a less visible risk – emotional dependence on AI:
“Conversational AIs are designed to be pleasant, empathetic, and available non-stop. For a child or adolescent who feels misunderstood or lonely, this can be a very attractive alternative to complex human relationships. But human relationships are complex precisely because they develop us – and AI cannot replace that.”
So what can parents and schools do? Bans are short-sighted; children will find a way, and moreover, they will miss the opportunity to learn to work with technology responsibly. A more meaningful approach is to build critical literacy. The ability to ask questions about where information comes from, why AI answered in a particular way, what it might have omitted or distorted.
Practical principles for parents and educators
– Treat AI as a thinking partner, not a replacement for thinking: ask questions together with the child.
– Explain that AI can be convincing yet incorrect; train children to verify information.
– Set boundaries not by prohibition, but by agreement and a discussion of the reasons.
– Preserve space for notebooks, handwriting, and face-to-face dialogue, which matter for cognitive development.
– Talk at home about what the chatbot said, why it might have said it, and whether you agree with it.
Bešenyi, who is a father himself, adds a perspective from everyday reality:
“My son once asked me if there was any point in learning multiplication when his phone could calculate it for him. I told him that if he doesn’t know what ten times something is, he won’t be able to tell when the calculator is giving him nonsense. The exact same applies to AI – without your own judgment, you have no way to check what it’s telling you.”
Schools face a similar dilemma. Banning ChatGPT in class is a simple administrative fix, but it does not prepare students for the reality they will live in. A more meaningful path is to change the type of assignments: from reproducing facts to analysis, argumentation, and original creation. These skills are hard for AI to replace, which is precisely why they will remain valuable in the job market.

Real benefit or a passing trend: how to tell the difference?
The pressure to adopt AI is ubiquitous today. Investors are asking about AI strategy, conferences are full of panel discussions about the future of work, and job postings include a requirement for “knowledge of AI tools” even where Excel was sufficient a year ago. In this environment, it is increasingly difficult to distinguish between genuine transformation and a mere communication response to market expectations.
Bešenyi has a direct perspective on this topic from practice:
“I always ask one simple question: what specific problem do you want to solve and how will you know that you have solved it? If a company cannot answer me, it means they are looking for an AI solution but don’t have a problem. Or they have a problem but don’t know how to name it – and that’s even worse.”
The real benefits of AI are reflected in specific metrics: time saved, reduced error rates, faster decision-making with better data, freeing up capacity for higher value-added work. These benefits are measurable and should be measured – ideally before launch (baseline) and after.
Podružek adds important context: measuring benefits must not be one-dimensional. Efficiency is not the only criterion.
“You can have a system that is incredibly efficient, but at the same time discriminates against certain groups, violates privacy, or creates dependencies that the organization cannot manage later. Ethical evaluation of AI is not hindering innovation – it is a prerequisite for innovation to bring real value and not just short-term profit at the cost of long-term damage.”
The biggest risk of AI? Loss of human judgment
Šucha offers a perspective from the standpoint of organizational culture and human capital. He warns that AI transformation, if not well managed, can erode precisely what makes an organization resilient and innovative – people’s ability to think creatively, collaborate, and take responsibility:
“The biggest risk of AI in organizations is not that it will replace jobs – that is happening and will continue to happen. The risk is that it will weaken human judgment: people’s ability to make decisions, take responsibility, and develop. If AI takes over not only routine tasks but also judgment, we will become dependent on systems we don’t understand and don’t control.”
So where to start – for a company and for an individual? The answers of all three respondents converge on one point: start small and specific. Not with the transformation of the entire organization, but with one process where AI brings meaningful benefit and where a responsible person can monitor the results.
Bešenyi concludes with a practical recommendation that reflects years of field implementations:
“The best AI implementations I’ve seen had one thing in common: they started with people, not technology. They asked teams what was holding them back, what frustrated them, what would save their time – and then they looked to see if AI could help precisely there. Not the other way around.”
Responsibility as a competence
Ethical AI is not a matter only for philosophers and regulators. It is a practical competence that managers, parents, teachers, and everyone who decides on deploying technology must acquire. It requires the ability to ask uncomfortable questions before a decision is made, and the willingness to accept the answer “not yet” or “not like this.”
From the discussion, we derive three principles that will test our perseverance in practice: we must clearly define where decision-making remains with people, build critical thinking before we have to defend it, and measure real benefits instead of following trends.
AI will continue to grow and change our work and lives. The question is not whether we accept it, but whether we retain the ability to manage it according to what truly matters to us.
TEXT: Natália Stašíková
PHOTO: INOVATO, KInIT