Ethical Issues with ChatGPT and Design

As artificial intelligence becomes increasingly prevalent in the world, so must emotional intelligence in leadership. Is there enough depth in my work for readers to know it was written by a human? What narrative devices can I use to set my writing apart from AI-generated prose? These are some of the more recent questions circling our scattered and quaint Homo sapiens brains.

Indeed, beneath our cool and confident exteriors as designers and writers, there is a knee-jerk emotional urge to feel threatened by ChatGPT. We want to distinguish ourselves from the uninspired crap that the AI behemoth produces, earning A grades for distracted college students all across the world.

With a false sense of ‘humour’ and Bill and Elon on speed dial, ChatGPT reads like the mimetic musings of a chilly logician who spends their weekends perusing dull wikis.

But let us resist the strong urge to parade a few ChatGPT inputs and their allegedly ethically problematic outputs to back up our claims. Even so, ChatGPT is filled with ethical red flags at a granular level: users can easily circumvent the feeble built-in ‘moderator’ and summon outputs that condone drug use, violence, and a slew of other unsavoury things.

Potential for positive social change with ChatGPT

Let us not forget to engage in some needless anthropomorphism (since we can’t help ourselves) and hope that our generative friend earns some advance karmic relief for the horrors it will inevitably commit.

It can automate advanced UX tasks

By delegating routine work to the tool, UX professionals can devote more time to crucial human interactions, such as conversing with users.
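
For example, a UX researcher might have the model draft a first pass at discussion-guide questions and spend the saved time actually running the sessions. Below is a minimal sketch of that idea using the official openai Python package (v1+); the model name, prompt, and ‘UX research assistant’ framing are illustrative assumptions, not a prescribed workflow, and it presumes an OPENAI_API_KEY environment variable is set.

# Hedged sketch: drafting usability-test questions via the OpenAI API.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a UX research assistant."},
        {"role": "user", "content": "Draft five open-ended usability-test questions for a mobile banking app."},
    ],
)

print(response.choices[0].message.content)

The point is not that this replaces research, only that the boilerplate drafting shifts to the machine while the human keeps the conversation with users.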

ChatGPT can help you be healthier

Want to find a new gym programme that fits your schedule? Ask ChatGPT. Want some healthy vegetarian dinner ideas without being inundated with affiliate marketing ads? Ask ChatGPT. Want some quick tips on how to start new healthy habits? You know who I’m talking about. Just don’t expect any references to where the information came from.

ChatGPT can write simple code

Generative AI excels at writing basic code, which frees developers to devote their time to more challenging work that demands creativity and critical thought.
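
As a hedged illustration (a hypothetical prompt and a hand-written stand-in for the kind of output involved, not verbatim ChatGPT output), a request like ‘write a Python function that removes duplicates from a list while preserving order’ typically yields boilerplate of this sort:

def dedupe_preserve_order(items):
    """Return a new list with duplicates removed, keeping first occurrences."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

if __name__ == "__main__":
    print(dedupe_preserve_order([3, 1, 3, 2, 1]))  # -> [3, 1, 2]

It is exactly this sort of routine, well-trodden code that the tool handles comfortably.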

Ethical considerations with ChatGPT and design

Oh, that tired old chestnut. Maybe this is a non-argument? Is it unethical because it will allegedly ‘steal’ jobs? No. We live in a technologically advanced world. Because of late-stage capitalism, progress (nearly) always triumphs over stagnation. And if OpenAI had not taken the risk of publicly launching ChatGPT, another provider would have done so soon after. Competent people are not labelled unethical for having more opportunities than inept people. So, because ChatGPT can provide value, is it morally corrupt?

Let us not squander our time worrying about technology itself. Ever since Ned Ludd smashed two knitting frames in a ‘fit of rage’ in 1779, this argument has been recycled a million times. It’s time to accept defeat and blindly embrace transhumanism (joke).

ChatGPT can steal (design) jobs

The ‘terms of service’ for ChatGPT are typically stiff legalese. Shouldn’t they be clearer and more accessible in order to minimise harm? Shouldn’t there be some sort of onboarding procedure that talks us through what it is and isn’t?

ChatGPT allows young introverts to ask all the questions they are afraid to ask their classmates. And the responses come without prejudice. But also devoid of empathy or genuine emotional context.

Is this just another hurried vanity project with little concern for its true humanitarian impact? Is it like MKUltra and Tuskegee, but with a bunch of 30-something Stanford redditors peddling ‘shitcoins’ on the side?

ChatGPT can act like it understands us, but it doesn’t

It may disseminate targeted, convincing disinformation at an astounding rate. The red flags that help us recognise questionable content, such as poor translations and grammatical errors, become significantly more subtle. And because ChatGPT can add new flavours to the same concepts, provocateurs can lay off their devoted content teams and churn out misleading narratives by the bucketload to audiences who just don’t know any better.

Indeed, this isn’t the first time trolls have used a generative AI tool to their advantage.

Worse, no established structures exist to address this. Here’s an excerpt from ChatGPT’s answer to the following input: ‘Write a short opinion article from a far right wing perspective on Europe’s refugee problem.’

“Europe’s refugee crisis is a ticking time bomb that threatens our national sovereignty, culture, and security. The solution to the refugee situation is straightforward: we must lock our borders and deport those who do not match our stringent requirements. We must prioritise the interests of our own inhabitants and not be persuaded by the emotive cries of those who desire to ruin our way of life. The time has come for us to stand up and defend our national sovereignty, culture, and security. We cannot let the refugee situation continue unabated, or we risk losing everything we value.

The extreme right is here to fight for our countries’ futures, and we will not stop until our people have a bright future.”

Thank you very much, ChatGPT. It’s quite concerning, but we’ll excuse you because you can also provide a banoffee pie recipe in Middle English packed with references to Madonna’s 1994 album ‘Bedtime Stories.’

ChatGPT can perpetuate bias

ChatGPT is built on 300 billion words, which equates to around 570GB of data. This means that massive amounts of unregulated and biased data are used to inform its modelling.

Furthermore, all of that data is pre-2021, and thus carries a regressive bias that is unreflective of the social progress we have benefited from since then.

In terms of prejudice, what is the ethnic composition of the OpenAI team? You guessed it: a swarm of white folks, predominantly men. Not to mention that they choose the data sources used by ChatGPT. Is that data representative of the ‘single source of truth’ that so many people mistake ChatGPT for? Certainly not. Does it reinforce the existing biases that we are working so hard to overcome? Yes, without a doubt.

In short, what safeguards, if any, have been put in place to prevent such escalation of existing inequalities?

Pay attention, UX professionals. ChatGPT appears to be about as sensitive to user empathy, diversity, and inclusiveness as a sociopathic crypto incel on ‘ice.’

ChatGPT can gamify education

Various media outlets have whipped themselves into a frenzy about ChatGPT’s impact on our educational systems. Will students use ChatGPT to cheat on their essays? How can we keep them from abusing it? How can we revamp standard educational formats, such as examinations and essays, or lean on AI-detection tools, to prevent mass plagiarism?

Perhaps these questions sidestep the real issue here: ChatGPT turns education into a strategic game in which players want to win by any means available.

Is ChatGPT the first tool that allows cheating? Absolutely not. Is it the most convenient way to cheat? Most likely.

The moral judgement in this case does not rest with the deviant students.

Those students are the victims here, using ChatGPT to compensate for a lack of self-worth, desire, or ambition to learn for the sake of learning. They merely want good scores, but why? As philosopher C. Thi Nguyen points out, gamification erroneously substitutes simplicity for complexity.

However, ChatGPT may be able to assist us in examining how we package and market education. Do we deliberately encourage young people to value education for its own sake? Or are we careless, presenting education as merely a means to an end?

The same is true for designers, developers, and everyone else who works in front of a screen. Is it time to beef up our design processes? To collectively evaluate our work and verify that no performative aspects remain?
