Ethics and responsibility: how Ukrainian companies respond to AI challenges
29 July 15:24
Artificial intelligence is claiming, or rather winning back from humans, ever more areas of life, inspiring in users not only admiration but at times also fear and caution. "Komersant Ukrainian" looked into how to protect yourself from AI risks and implement artificial intelligence responsibly.
A week ago, the Swiss company Proton launched a new AI assistant called Lumo, which is designed to maximize the protection of users’ personal data.
Among the new assistant's advantages are end-to-end encryption, decryption that happens only on the user's device, and a chat history that disappears automatically once the window is closed.
“Lumo is based on open-source language models and operates from European Proton data centers. This makes Lumo’s operation far more transparent than that of any other major AI assistant. Unlike Apple Intelligence and others, Lumo is not a partner of OpenAI or other American or Chinese AI companies, and your requests are never sent to third parties,” Proton explains the new assistant’s benefits.
In other words, the new product takes a strict approach to privacy and is a direct response to one of today's central AI challenges: ensuring transparency and protecting user data.
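Proton has not published Lumo's internals here, so the following is only a minimal sketch of the pattern it describes: encrypt on the device, decrypt only on the device, and keep no history after the session ends. Every class and method name below is hypothetical, and Fernet merely stands in for whatever zero-access encryption scheme Lumo actually uses.

```python
# A rough sketch of the privacy pattern described above: the
# conversation is encrypted on the user's device, the key never
# leaves it, and the history vanishes when the session is closed.
# Names are invented; requires the third-party "cryptography" package.
from cryptography.fernet import Fernet


class EphemeralChatSession:
    def __init__(self):
        self._key = Fernet.generate_key()   # generated and kept on the device only
        self._cipher = Fernet(self._key)
        self._history = []                  # held in memory, never written to a server

    def send(self, message: str) -> bytes:
        """Encrypt locally; only ciphertext would ever leave the device."""
        ciphertext = self._cipher.encrypt(message.encode("utf-8"))
        self._history.append(ciphertext)
        return ciphertext

    def read(self, ciphertext: bytes) -> str:
        """Decryption happens only where the key lives: on the user's device."""
        return self._cipher.decrypt(ciphertext).decode("utf-8")

    def close(self):
        """Closing the window drops both history and key irrecoverably."""
        self._history.clear()
        self._key = None
        self._cipher = None
```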
Ukrainian response to AI challenges
The penetration of artificial intelligence into all spheres of life not only opens up new opportunities but also carries risks: algorithmic bias, privacy violations, and a lack of transparency in automated decision-making. Businesses must respond to these challenges responsibly and proactively. This is how the Ukrainian software company MacPaw explained its participation in creating the first self-regulatory organization in the field of artificial intelligence in Ukraine.
In total, 14 Ukrainian companies have joined this initiative: Grammarly, MacPaw, LetsData, DroneUA, WINSTARS.AI, Gametree.me, YouScan.io, EVE.calls, Valtech, LUN, Yieldy, SoftServe, Uklon, Preply. In addition to the Memorandum of Understanding on the establishment of a self-regulatory organization, they signed the Voluntary Code of Conduct for the Ethical and Responsible Use of AI.
Volodymyr Kubytskyi, Head of AI at MacPaw, calls the signing of the Memorandum and the development of the Code a “strategic step towards the ethical development of artificial intelligence.”
“Previously, there was no association in the country that included companies committed to the ethical development of AI. Now such an association exists and can become an important partner of the state in shaping policy in this area. There was also a need for collective problem solving and experience sharing. Complex challenges are easier to solve together in a professional discussion format. Such interaction will contribute to the development of the industry, the formation of sustainable practices, and a culture of responsible AI use,” the expert emphasizes.
Companies that have signed the Code of Conduct for the Ethical and Responsible Use of AI are obliged not only to implement its norms but also to report to the Secretariat of the self-regulatory organization at least once a year. The Secretariat's functions are performed by the IT Ukraine Association and the Center for Democracy and Rule of Law. Maria Shevchuk, Executive Director of the IT Ukraine Association, comments:
“This initiative is an example of how the Ukrainian IT community is ready to take responsibility for the development of artificial intelligence. We at IT Ukraine Association are pleased to join the process as part of the Secretariat and support the formation of ethical and clear rules in this dynamic area. We believe that self-regulation will become a solid foundation for trust, development, and international partnership in the field of AI,” said Maria Shevchuk.
Ethical guidelines for AI developers
The Code of Conduct for the Ethical and Responsible Use of Artificial Intelligence focuses not on strict requirements but on the values and principles that companies should rely on when implementing AI.
As the IT Ukraine Association explains, these include, in particular (a schematic illustration follows the list):
- assessment and management of risks associated with the use of AI;
- ensuring security and resilience to external threats;
- protection of personal data and privacy;
- transparency in providing information on AI systems;
- control and, if necessary, human intervention in AI processes;
- respect for intellectual property rights;
- dissemination of knowledge about the ethical use of AI.
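The Code itself contains no code, of course; purely as a hypothetical illustration, here is how a company might turn these principles into an internal checklist that every AI feature has to pass before release. All names below are invented for this sketch.

```python
# A hypothetical internal checklist built from the Code's principles.
# This is an illustration, not part of the Code or any company's tooling.
from dataclasses import dataclass, field

PRINCIPLES = [
    "risk assessment and management",
    "security and resilience to external threats",
    "personal data protection and privacy",
    "transparency about the AI system",
    "human oversight and intervention",
    "respect for intellectual property",
    "sharing knowledge on ethical AI use",
]


@dataclass
class AIFeatureReview:
    feature: str
    results: dict = field(default_factory=dict)

    def mark(self, principle: str, ok: bool, note: str = ""):
        assert principle in PRINCIPLES, f"unknown principle: {principle}"
        self.results[principle] = (ok, note)

    def ready_for_release(self) -> bool:
        # Every principle must be explicitly reviewed and satisfied.
        return all(self.results.get(p, (False, ""))[0] for p in PRINCIPLES)


# Usage: a feature ships only after all seven principles are signed off.
review = AIFeatureReview("smart reply")
for p in PRINCIPLES:
    review.mark(p, ok=True, note="reviewed by the ethics lead")
assert review.ready_for_release()
```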
MacPaw adds that the Code also contains practical examples that show how these principles can be implemented in day-to-day work.
According to Volodymyr Kubytskyi, Head of AI at MacPaw, one of the biggest risks of AI adoption concerns privacy.
“People are increasingly trusting AI with their personal stories, thoughts, and preferences, and they do not always understand what consequences this may have. The problem is not only malicious intentions, but rather the fact that everything that enters the system can theoretically be stored, transmitted, and retrieved somewhere. It’s even more complicated when the system accumulates a deep understanding of a person: their psychological profile, reactions, and weaknesses. Against this backdrop, there is a potential for manipulation, even imperceptible manipulation. How to work with this: the main thing is that the user has full and real control. You need to understand what the system knows about you and be able to delete or restrict it,” says Volodymyr Kubytskyi.
MacPaw is currently working actively on Eney, its AI assistant for macOS. Eney takes over routine interactions with the system, making them easier, faster, and more efficient. Using this product as an example, Volodymyr Kubytskyi explained how privacy is handled.
“The company pays a lot of attention to ensuring users’ privacy and control over personal data. For example, we are thinking about how to implement a “right to be forgotten” approach in Eney so that the user can completely delete the information the system has stored about them. This is still in progress, but for us the approach itself is fundamental and ethical by default. We don’t want the system to know more than it really needs to, and we definitely don’t want it to store anything “just in case.” For us, this is not only a technical issue but also an ideological one. This is the approach we want to use to develop our AI tools in the future,” the expert emphasizes.
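Eney's storage layer is not public, so the sketch below only illustrates the idea Kubytskyi describes: the user can inspect everything the assistant has remembered about them and erase it completely. Every name here is invented for illustration and implies nothing about MacPaw's actual implementation.

```python
# A schematic "right to be forgotten" store: remember only what is
# needed, show the user everything held about them, and erase it all
# on request. Purely illustrative; not Eney's real code.
from typing import Any


class UserMemoryStore:
    def __init__(self):
        self._facts: dict[str, list[Any]] = {}

    def remember(self, user_id: str, fact: Any):
        # Store only what the assistant actually needs, nothing "just in case".
        self._facts.setdefault(user_id, []).append(fact)

    def what_do_you_know(self, user_id: str) -> list[Any]:
        """Transparency: let the user see every stored item."""
        return list(self._facts.get(user_id, []))

    def forget_me(self, user_id: str):
        """Right to be forgotten: remove every trace of this user."""
        self._facts.pop(user_id, None)
```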
Adapting AI to European standards
The creation of Ukraine's first self-regulatory organization in the field of artificial intelligence is not only a response to national challenges but also an important step in positioning Ukraine as a responsible participant in the global AI environment, the IT Ukraine Association emphasizes.
“The G7 countries, the European Union, the UK, Canada, Japan, as well as the world’s leading companies Google, Meta, Amazon, and Microsoft are already developing and implementing ethical codes, standards, independent audits, and self-reporting policies. In this context, the initiative of Ukrainian companies to create the first organization for the ethical and responsible use of AI in Ukraine is a timely and strategically important step,” said Maria Shevchuk, Executive Director of IT Ukraine Association.
Volodymyr Kubytskyi, Head of AI at MacPaw, also believes that such an initiative is a way to prepare for European regulation in the AI field.
“It is important for businesses to align their activities with the coming rules, forming internal policies that will be compatible with future legislative requirements,” emphasized Volodymyr Kubytskyi.
This is all the more significant given that the EU's AI regulations are widely seen as quite strict. Notably, Meta recently announced that it will not join the voluntary Code of Practice for Artificial Intelligence developed by the European Commission on the eve of the AI Act's entry into force. According to TechCrunch, citing a LinkedIn post by Meta Vice President of Global Affairs Joel Kaplan, the company's decision reflects its view that the European approach to AI regulation is excessively strict and runs counter to developers' interests.
The European AI Act takes effect on August 2, 2025. The risk-based law explicitly prohibits some of the most dangerous practices, such as cognitive-behavioral manipulation and social scoring. It also defines “high-risk” areas of AI application, such as biometrics and facial recognition, as well as the use of AI in education and employment. The law requires developers to register their AI systems and to meet risk- and quality-management obligations.
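The AI Act's actual legal tests are far more detailed, but its risk-tier logic can be caricatured in a few lines. The categories below paraphrase this article's summary of the regulation, not the regulation's own text.

```python
# A deliberately simplified sketch of the AI Act's risk-based logic,
# paraphrasing the summary above rather than the regulation itself.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"            # banned outright
    HIGH = "high"                        # allowed, but heavily regulated
    LIMITED_OR_MINIMAL = "limited-or-minimal"


PROHIBITED_PRACTICES = {"cognitive-behavioral manipulation", "social scoring"}
HIGH_RISK_AREAS = {"biometrics", "facial recognition", "education", "employment"}


def classify(use_case: str) -> RiskTier:
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.LIMITED_OR_MINIMAL


# High-risk systems carry the registration and risk/quality
# management duties mentioned above.
assert classify("facial recognition") is RiskTier.HIGH
```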
Author: Sergiy Vasilevich