Can AI Have "Ethics" Without Human Bias?

I’ve been reading up on AI ethics lately, and one thing that really stands out is how tricky it is to build “ethical” AI that doesn’t just reflect the biases of its creators. We want these systems to be fair, but since they learn from human data, they end up inheriting all our flaws, too.

For example, AI used in hiring might end up favoring certain demographics simply because the data it's trained on has historical biases baked in. So, even though the model has no "intent" of its own, it can still systematically disadvantage certain groups in its decisions.
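To make that concrete, here's a minimal sketch (with made-up group names and toy numbers, not real data) of how people often check for this kind of inherited skew: compare the "selection rate" of positive outcomes across groups in the historical data. If the gap is large, a model trained on that data is likely to reproduce it.

```python
# Hypothetical example: measure how skewed historical hiring outcomes are
# per group, using the "demographic parity" gap (difference in selection rates).
from collections import defaultdict

# Toy stand-in for past hiring decisions a model would be trained on:
# (group, hired) pairs with a deliberate historical skew baked in.
historical_data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def selection_rates(records):
    """Fraction of positive (hired) outcomes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        positives[group] += hired
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(historical_data)
print("Selection rate per group:", rates)   # e.g. {'group_a': 0.75, 'group_b': 0.25}

# A gap of 0 would mean equal selection rates; a large gap means a model
# fit to this data will likely learn and repeat the same imbalance.
gap = max(rates.values()) - min(rates.values())
print("Demographic parity gap:", gap)
```

This is only one possible fairness check (and a crude one), but it shows why "the AI isn't trying to be biased" doesn't help much: the bias is already in the numbers the system learns from.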

But this got me thinking: Is it even possible to create an AI system that can make ethical decisions without leaning on some kind of human bias? And if not, how do we make sure these systems are at least less biased than humans, instead of just amplifying existing issues?