Can ChatGPT write a YouTube script?

Worried that ChatGPT could do a better job at writing one of the scripts for her videos, YouTuber Jill Bearup set it a task that’s typical for her channel: commenting on and rating the boots worn by female action heroes. The results were practical and functional and stylish and practical.

I liked how the ChatGPT-generated script made sense in certain ways but also revealed such hollowness, especially as it went on.

On campaigns against disinformation

I appreciate that Tablet Magazine did this deep dive into government-backed initiatives against disinformation, and how they often just result in more disinformation: propaganda, a manipulation of narratives, a system ensuring that certain facts never surface and that reasonable but uncomfortable questions remain unheard and unaddressed.

One excerpt:

The first phase of the information war was marked by distinctively human displays of incompetence and brute-force intimidation. But the next stage, already underway, is being carried out through both scalable processes of artificial intelligence and algorithmic pre-censorship that are invisibly encoded into the infrastructure of the internet, where they can alter the perceptions of billions of people.

It’s important to make time to read this article and better understand the ways that technology is wielded by powerful entities (governments collaborating with huge corporations) to shape us – what we know, what we think, how we think.

Two types of AI scams

Artificial intelligence is a social disruptor. Although it may deliver benefits across different industries and in private life, it’s also a potential weapon, and there are already scammers taking advantage of it.

In one type of scam, criminals use AI voice-generating software to imitate your loved ones and pretend they’re in distress, maybe suffering a medical or legal emergency. Their goal is to scare you and get you to send money.

Another type of scam uses AI-generated artwork to convince you to donate to what you think is a legitimate charitable cause, like disaster relief for earthquake victims. The images stir up emotions and prompt you to act quickly. But the money just goes to scammers.

Steven Pinker’s Views on ChatGPT

It’s worth reading this piece in the Harvard Gazette, where Pinker gets asked if he thinks that AI is going to supplant human creative and intellectual endeavors.

Overall, he sounds pretty optimistic (though maybe he’s downplaying the shakeup that many will experience as AI advances), but I do want to highlight one part that rang true to me. He points out that one pushback against AI is people’s need to connect with other people:

The demand for authenticity is even stronger for intellectual products like stories and editorials: The awareness that there’s a real human you can connect it to changes its status and its acceptability.

The Limitations of Artificial Intelligence…

… and what they reveal about human limitations and strengths. Two quick examples:

Watch this video, which focuses on a picture book while asking important questions about how our brains work vs. how AI works. At what age will a young child understand what happened to the thieving rabbit? Can AI understand the story’s shocking conclusion?

And consider this recent article from CNET on the biases in algorithms (a topic I posted about before). People sometimes think that AI-based decisions will somehow be objective, free from biases and errors in judgment. But what data do algorithms get trained on? And who gets to say what’s a fair AI decision and what’s not?

Concerns About AI Bias Aren’t Just “PC-Ness”

People often need to frame things as a battle between two forces (“libs” vs. “conservatives,” or “SJWs” vs. “anti-SJWs”). Any concerns or opinions mentioned predominantly by one side will get automatically shot down by the other.

I’m seeing these kinds of knee-jerk responses in conversations about algorithms trained to make predictions about individuals. Depending on where the algorithm is used, these predictions can affect anything from the health care you receive to whether you’re hired for a job.

One example is a medical algorithm that was significantly more likely to recommend special health care programs for white patients than for equally sick black patients. The factor that shaped the decision-making in this case wasn’t even race, at least not directly. From a recent MIT Technology Review article on this issue:

[Image: “How bias crept in,” from the MIT Technology Review article]

One of the remarks I regularly hear (and read) about this topic is that these algorithms are upsetting people because they reflect “facts not feelings” and that “facts don’t lie.” Ok, maybe facts don’t lie, but what do they actually reflect? What datasets are you training these algorithms on, and what do the data really tell you about people? (Not just groups of people, but individuals who are on the receiving end of these predictions.) The fact that members of one group may have historically been more likely to receive worse health care, on average, than members of another group doesn’t mean we need to perpetuate the problem.
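To make that mechanism concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the data is synthetic, the numbers are invented for illustration, and it isn’t the actual algorithm from the article. It just shows how ranking patients by a biased proxy (historical cost) instead of by true need can skew outcomes even when group membership never enters the score.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True sickness is distributed identically in both groups.
need = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)

# Hypothetical assumption: group 1 historically received less care,
# so its recorded costs understate its true need.
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 5, n)

# Flag the top 10% for a special care program, once scored by true
# need and once scored by historical cost.
for label, score in (("ranked by true need", need), ("ranked by historical cost", cost)):
    flagged = score >= np.quantile(score, 0.9)
    print(label)
    for g in (0, 1):
        print(f"  group {g}: {flagged[group == g].mean():.1%} flagged")

Ranking by need flags roughly 10% of each group; ranking by cost flags far more of group 0, even though “group” was never an input. That’s the sense in which the deciding factor “wasn’t even race, at least not directly”: the bias rides in on the training data.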

The biases produced by these algorithms – biases that may be based on class, income, race, sex, or other dimensions – don’t necessarily reflect unchanging truths about human nature or social problems we can never address. So it’s disheartening to see people crow that algorithm-based decisions reveal the “real truth” underneath the layers of PC-ness we’re festooned with as a society.

The medical algorithm mentioned in this post was examined, and the problem was addressed. In many other cases, we don’t know why algorithms make the predictions or decisions they do. We don’t know what data they’ve been trained on, and companies keep quiet about it. There may be little accountability and little opportunity to appeal a decision. This is a critical issue to discuss, hopefully with a minimum of knee-jerk responses and thought-terminating clichés (chants of “facts not feelings” from people who are also reacting emotionally to this issue, though they don’t recognize their satisfaction or delight as feelings).