Guide: Use AI Responsibly and Ethically

In this article, you’ll learn about the legal, ethical, and social issues created by modern AI. You’ll also learn how to use AI responsibly and ethically, to the benefit of society as a whole.

With great power comes great responsibility. During previous technological revolutions, humans gained access to new superpowers. In many cases, these new superpowers were abused, often to the detriment of many less-fortunate individuals.

Agricultural societies gained the ability to raise large armies. They often misused this new superpower to acquire new land, enslave human labor, and subjugate entire civilizations. Industrial societies gained the ability to mechanize warfare. They misused this superpower to control resources, maintain colonies, and expand their political influence.

Information societies gained the ability to wage cyber warfare. We’ve definitely misused this new superpower for mass surveillance, industrial sabotage, and election tampering. In the AI revolution, the situation will likely be more of the same. We are going to discover some amazing new superpowers in the next few years.

However, some people will try to use these new powers for their own self-interest and personal agendas. With each technological revolution, we are like children getting our hands on a sharp object for the first time. We typically don’t learn our lesson until we’ve cut ourselves a few times.

The difference with AI is that we might not get a second chance to learn from our mistakes. With great power comes great responsibility, and with AI, we have the greatest responsibility that humankind has ever known. The emergence of modern AI has led to some rather interesting legal, ethical, and social issues in recent years.

For example:

  • We now have facial-recognition systems throughout our cities that may be violating our right to privacy.
  • We have AI-generated advertisements that use a consumer’s behavioral profile to manipulate their purchase decisions.
  • We have text-generation software (like GPT-3) which can generate propaganda and “fake news” on an unprecedented scale.
  • We have deep-fake technology that can be used to impersonate politicians, celebrities, and executives for nefarious purposes.
  • We have deep-nude technology that can digitally remove a person’s clothing without their consent and has been used for blackmail and exploitation.
  • And we have semi-autonomous weapons that are very close to becoming fully autonomous weapons.

These are just a few of the current legal, ethical, and social issues that we’re now facing with modern AI.

However, there are much more advanced and sophisticated AI technologies just over the horizon. Given this, the number and severity of these ethical issues are likely to increase significantly in the near future.

To put it simply, we’re going to have some very difficult ethical issues to deal with in our lifetimes.

For example:

What does privacy mean in a world with constant and pervasive AI surveillance? We currently have very little privacy, and we’re about to have even less.

How do we avoid bias and discrimination in our AI models? It’s easy to create biased AI models that will directly impact the lives of millions of people.

Even more concerning, should we allow AI to be weaponized?

Should we ban fully autonomous weapons before we enter a new AI arms race?

And if we do, how long until a conflict pressures a government to override this directive?

How should we allocate resources in a post-human-labor world?

Some economists suggest that we will need a Guaranteed Basic Income, a Negative Income Tax, or a social stipend.

But how do we pay for that?

Should we tax robots in order to offset the inevitable unemployment from automation?

How do we even begin to determine the true value of each robot’s labor in order to tax it appropriately?

And if we tax robots, what rights should they have in our society?

I mean, we’ve fought wars over the idea of “No taxation without representation”.

What will the machines, or the capitalists that own them, demand in return for paying the bulk of all taxes in the world?

It’s pretty clear that we have a lot of ethical questions to answer in the coming decades. The most important of these questions, though, is:

“What does this all mean for humanity?”

What is our purpose in a world where machines do all the work of real economic value?

Does this technology set us free?

Or does humanity eventually become obsolete?

How do you use AI responsibly and ethically?

What should you be doing today to ensure that we don’t misuse our new AI superpowers?

First, start asking the difficult questions now.

If you haven’t spent any time thinking about any of the ethical questions I just posed, then you’re probably not prepared for what is rapidly approaching. You don’t need to spend all day philosophizing about the legal, ethical, and political implications of AI. We’ll leave that up to the lawyers, philosophers, and politicians.

However, you do need to think about what your core values are and how they apply to these new AI-created dilemmas. Each of us needs to know where we stand on these key issues before they take us by surprise.

Second, avoid bias in your AI models.

It’s very easy to accidentally (or intentionally) create bias in your AI models. If you train a model with biased data, you get biased results. As the old saying goes: garbage in, garbage out. This creates feedback loops that can reinforce and amplify existing socio-economic divisions in our society. This will be especially true as these algorithms begin to impact the lives of millions of people.
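As a concrete starting point, here’s a minimal sketch of the kind of check that can surface obvious skew before a model ships. It assumes a hypothetical pandas DataFrame with a sensitive attribute ("gender"), historical labels ("approved"), and your model’s predictions ("predicted"); substitute whatever columns your own data actually uses.

```python
import pandas as pd

# Hypothetical loan-approval data with a sensitive attribute.
# The column names here are assumptions for illustration only.
data = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved":  [1, 1, 0, 1, 0, 1, 1, 1],   # historical (possibly biased) labels
    "predicted": [0, 1, 0, 1, 0, 1, 0, 1],   # your model's predictions
})

# Compare positive-outcome rates per group (a rough demographic-parity check).
rates = data.groupby("gender")[["approved", "predicted"]].mean()
print(rates)

# A large gap between groups is a warning sign that biased training data
# is being reproduced, or even amplified, by the model.
gap = rates["predicted"].max() - rates["predicted"].min()
print(f"Prediction-rate gap between groups: {gap:.2f}")
```

A check like this won’t prove a model is fair, but it will tell you quickly whether the model is simply echoing the skew in its training data.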

Third, provide transparency in your AI models.

If you automate a decision with AI, you should make the decision-making process as transparent as possible. If AI becomes a “magic black box” that cannot be questioned, then how can anyone get any recourse when it makes an incorrect decision?

As a result, I recommend that you always use the simplest AI tool that effectively solves a given problem. Don’t use a complex deep neural network if a simple decision-tree classifier will suffice. And when possible, use explainable AI tools. Explainable AI provides diagnostic explanations for how and why a specific decision was made.

Ultimately, ask yourself: could you explain to a judge how this AI model made its decisions? If you can’t explain it, or the judge wouldn’t be able to understand your answer, then it’s not transparent.
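To make the “simplest tool that works” advice concrete, here’s a minimal sketch using scikit-learn’s decision-tree classifier and its export_text helper. The iris toy dataset simply stands in for whatever decision you’re automating; the point is that a shallow tree’s rules can be printed and read as plain if/else statements.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A small, interpretable model on a toy dataset; the iris data stands in
# for whatever decision you are actually automating.
iris = load_iris()
X, y = iris.data, iris.target

# Keep the tree shallow so the learned rules stay human-readable.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# export_text prints the decision rules as plain if/else statements:
# the kind of explanation you could actually walk a judge through.
print(export_text(model, feature_names=list(iris.feature_names)))
```

If those printed rules already answer “why was this decision made?”, you may not need anything heavier. If they don’t, explainable-AI tools such as SHAP or LIME can generate per-prediction explanations for more complex models.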

Fourth, protect your private data from abuse.

If people knew what I could do with their personal data and the right algorithm, they’d probably be much more cautious. However, the general public is currently quite a few steps behind data scientists in this domain. Think about what data you are willing to make public, and protect everything that you want to remain private. Only entrust your personal data to organizations that you trust can and will protect your privacy.
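On the practitioner side, one simple habit is to pseudonymize direct identifiers before data is ever stored or shared. The sketch below is just one illustration, using invented field names and a salted hash; keep in mind that hashing alone is not full anonymization, only a first line of defense.

```python
import hashlib
import secrets

# A secret salt kept out of the dataset; without it, common values
# (like well-known email addresses) could be reversed by brute force.
SALT = secrets.token_hex(16)

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted, irreversible token."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

# Hypothetical records; the field names are assumptions for illustration.
records = [
    {"name": "Ada Lovelace", "email": "ada@example.com", "purchase": 42.50},
    {"name": "Alan Turing",  "email": "alan@example.com", "purchase": 13.37},
]

# Strip or tokenize the personal fields before analysis or sharing.
safe_records = [
    {"user_id": pseudonymize(r["email"]), "purchase": r["purchase"]}
    for r in records
]
print(safe_records)
```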

Finally, demand more from our leaders.

We have several AI-related legal, ethical, and social issues that we need to address in the very near future. Unfortunately, many of our politicians have very little understanding of AI and thus are unable to make effective public-policy decisions.

You need to choose representatives who understand AI and how it can be either a benefit or a detriment to our future society. In addition, we need to choose the best corporations to lead our society in the right direction. You vote for these corporations every time you spend your money on their products and services.

Vote for them with your dollars and hold them accountable if and when they fail us.

To recap our recommendations:

  • Use AI responsibly and ethically.
  • Start asking the difficult questions now.
  • Avoid bias in your AI models.
  • Provide transparency with your models’ predictions.
  • Protect your private data from abuse.
  • Demand more from our leaders.