The Principles of Ethical AI Design
- matthew88236
- Feb 3
- 3 min read

When the Chinese company DeepSeek released its chatbot app, powered by its DeepSeek-R1 model, last week, there was a flurry of excitement (both good and bad). Despite U.S. export controls that make access to Nvidia’s most advanced chips nearly impossible, the company managed to build a service that has thrown the wider AI world into chaos. This represents the first major disruption within an already disruptive market. Furthermore, DeepSeek open-sourced its code.
DeepSeek’s primary servers are located in the People’s Republic of China, and all collected data is stored on those servers. This includes user inputs, device information, and usage patterns. Because companies in China must disclose their data to the government on request, this has raised alarm in European countries; the app has already been banned in some, such as Italy. The models are also trained on censored data, which produces biased results. This post on LinkedIn by Rene Bystron is worth a read: https://www.linkedin.com/feed/update/urn:li:activity:7290053622787174400/
Perplexity AI, a San Francisco company, has integrated DeepSeek’s models into its platform. You can access the DeepSeek-R1 model through its services, though you have to purchase Perplexity Pro to do so. Because Perplexity self-hosts and self-manages the model, it can ensure that user data remains within its controlled infrastructure. Even there, the model is reportedly showing CCP bias.
Why am I laying all of this out, and why am I “picking” on DeepSeek? Mostly because I think it provides a pretty stark example of unethical design of AI tools. It can become a laboratory for us to do better.
The authors of “Simply AI” (ISBN 978-0-5938-4705-3, Penguin Random House) write,
“As AIs become ever more intelligent, the question of how to ensure that they behave ethically becomes increasingly important. Machine-learning tools have neither agency nor values, and so cannot be relied upon to offer suggestions that are in the best interests of humanity, or do not favor one social group over another. The only way to ensure that AIs think ethically is to program them with ethical principles, although then the question becomes: whose ethics? Ideally, an AI should have equal respect for all humans, and be able to detect and compensate for bias.”
There are three principles of ethical AI design:
1) Transparency
2) Privacy
3) Fairness (freedom from bias)
DeepSeek violates two of these three principles.
✅ TRANSPARENCY - DeepSeek has open-sourced its generative AI algorithms, models, and training details, making its code freely available to view, use, and modify. This appears to fall on the right side of the ethical equation. Unethical design makes decision-making opaque, preventing people from seeing why or how a decision was made.
Ethical design allows us to understand and judge whether the decision is correct or not.
❌ PRIVACY - DeepSeek, under Chinese law, must disclose to the Chinese government data that people would generally like to keep private. Unethical design does not allow individuals to control their data. They don’t know who can use it, who can see it, or how it is used.
Ethical design keeps personal data private and allows users to retain control over who can use it and what is seen.
❌ FAIRNESS - DeepSeek has bias designed into its algorithm, as demonstrated in Rene Bystron’s post. The version hosted in China applies content restrictions that follow local regulations, which skews responses on certain topics.
Ethical design seeks to remove bias from responses.
I want to be clear that bias exists within almost every model - this isn’t just a DeepSeek issue. That bias reflects who trains the model. But it behooves us to remove as much bias as possible when creating these systems.
The reason I chose to talk about DeepSeek in this post is simple. The company offers clear-cut examples of both good and bad behaviour across the three principles. It does a stellar job on Transparency but falls flat on both Privacy and Fairness. With a lack of regulation in this space, it is up to us, the creators, to do the right thing.