Digital Works Podcast

Episode 045 - Ash & Katie, Bytes #7 - a conversation about advances and problems with AI, YouTube CEO's predictions for 2024, and shifts in how the web is monetised

Digital Works Season 1 Episode 45

The seventh in our new series, Bytes, where Ash and Katie discuss 3 things from the latest Digital Works Newsletter.

In this episode we'll be discussing some of the links in the newsletter that went out on February 26th.

We talk about:

  • The launch of OpenAI's new text-to-video product, Sora. The apparent meltdown that ChatGPT had last week, and the BBC's AI Principles
  • We also talk about YouTube CEO Neal Mohan's predictions for 2024
  • And last but not least we'll look at an article in Engadget about the Browser Company's new product, titled "Who makes money when AI reads the internet for us?"

I realise that's more than three things, but broadly it's AI news, YouTube predictions, and considerations about how the web is monetised.

You can sign up for the newsletter at thedigital.works.

Speaker 1:

Hello and welcome to the Digital Works podcast, the podcast about digital stuff in the cultural sector. My name's Ash, and today's episode is episode 7 of Bytes, which is our regular short-form series where we look at three interesting things from our most recent Digital Works newsletter. You can sign up for the newsletter on our website, which you can find at thedigital.works, and joining me today, and for all episodes in this series, is the person who puts the newsletter together, my colleague Katie. Today we'll be discussing things from the newsletter which went out on Monday, the 26th of February 2024, and I'll put a link to that in the show notes for this episode. Hi, Katie.

Speaker 1:

Hello. The three things that we're going to look at today are the launch of OpenAI's new text-to-video product, which they're calling Sora. We'll also look at the apparent meltdown that ChatGPT had last week, and we'll examine the BBC's AI principles that they've published. We also want to talk about YouTube CEO Neal Mohan's predictions for 2024 and, last but not least, we will look at an article in Engadget about the Browser Company's new product. The article is titled "Who Makes Money When AI Reads the Internet for Us?" Now, I realise that's more than three things, but broadly it's AI news, YouTube predictions and considerations about how the web is monetised.

Speaker 1:

So, first up, we are going to look at three things at the same time, because they feel linked to each other. Let's start with the launch of Sora, which is OpenAI's latest product, and it allows you to create, as they say, realistic and imaginative scenes from text instructions. But in the newsletter you included a quote from Gary Marcus, which said the system is trying to approximate the world, but it just isn't very good at that job. It uses arrangements of pixels to predict other arrangements of pixels, but it does not try to build an internal model of physics, and it does not try to build an internal model of cultural artifacts either. Now, I've also read articles that have highlighted the potentially terrifying effects, particularly around disinformation, that this new tool could create, especially with really important elections coming up this year. So what's your perspective on this, the latest leap forward in AI-generated functionality?

Speaker 2:

I mean, well, straight off the bat, it's an incredible development, isn't it? To be able to create fully rendered video from a sentence or two is astonishing. Of course, as Gary Marcus was pointing out, it's not perfect at all, a bit like images that are created using generative AI. There's always a slight weirdness with them at the moment, but of course that's going to develop, and it's going to get better so quickly.

Speaker 2:

I think there are a few different strands to this, aren't there? There is the question of how people might use it in negative ways, as you're saying, whether that's disinformation, etc. There is clearly a real issue with that. I saw this TikTok yesterday from a young woman, and she's essentially saying, I created an AI video of myself. You think it's going to be a stitch, so you think it's going to stitch into the AI, but obviously the big ta-da is that actually the video itself is AI. She used a generative AI audio tool and video tool, stitched it together, and you would literally not know that it wasn't her.

Speaker 2:

When you look at other videos on her account, it's astonishing. So there's that. I mean, who knows what's going to happen? It's going to be interesting, particularly with the election. On the other side of things, though, from a creative perspective, I think it will be interesting to see what organisations do with this technology for things like creating that kind of ephemeral, throwaway content for social media, short video, etc. So it's going to have huge ramifications for sure, both for good and not so good.

Speaker 1:

I mean, I suppose on the subject of Gary Marcus's point that these tools are fallible, I want to quickly look at the fact that ChatGPT had a bit of a meltdown last week and was essentially talking gibberish. Katie linked to some really lovely examples in the newsletter, and also in the newsletter you highlighted that Air Canada is paying the price for its customer service AI chatbot giving a user incorrect information. And just this week in the news in the UK there was a story about a Willy Wonka experience that was advertised using all AI-generated assets and was, to put it mildly, a bit of a disaster. I think that, whilst Sora is absolutely an amazing step forward, and once these tools are released the speed of improvement is equally astonishing, we also need to be mindful that they are fallible. What does this little cluster of stories tell us about the reliability of these types of tools at this stage in their development?

Speaker 2:

I mean, it's a salutary reminder that they are not pulling factual information from a database, that they are creating text, images, whatever, based on predictions. I shouldn't laugh, really, because there are some, I guess, you know, serious consequences of these sorts of tools going haywire, as the Air Canada example showed. For those who haven't read it, basically their AI-driven chatbot effectively told a customer that they were eligible for a refund when they in fact weren't, and the courts upheld that, because the customer fought it when Air Canada said no, you're not, sorry, the chatbot's gone crazy. So that's the kind of really serious issue for a company that has a financial impact.

Speaker 2:

But the article about the fact that ChatGPT started speaking gibberish just made me laugh, because there's a sort of element of the real absurd about it, where it just started saying completely bizarre things, and, I mean, you know, objectively that's quite funny. But again, it's just a salutary reminder that it's not facts. It is not facts. So if you're using these tools to help you with writing reports or doing admin tasks to do with work, just be aware that, you know, you can't necessarily rely on it to be truthful. It is quite funny, though, because I just wonder if, you know, it was having an off day. Anyway, OpenAI fixed it quite quickly, so, you know.

Speaker 1:

I mean, yeah, it was surreal. It was like it was drunk, which was quite enjoyable.

Speaker 1:

But, you know, also in the newsletter you linked to an article which highlights a number of, you know, really practical ways in which AI could be embedded into people's workflows, you know, today.

Speaker 1:

But the last thing I want to talk about in this section is related to that, because it feels like, based on everything we've just talked about, if you are going to be embracing AI tools at work, there needs to be some scaffolding around that to, you know, mitigate the risk and a lot of the unreliability that we've just talked about. And you shared, I think, a really nice example of the BBC's AI principles, which I'll just read out, the sort of headlines. They say: we will act in the best interest of the public, we will prioritise talent and creativity, we will be open and transparent. And this covers things like making it clear that there will be human oversight, that they will consider and respect privacy and intellectual property rights, and that any AI work has to adhere to the BBC's editorial guidelines and values. Would you say that cultural organisations should be thinking about something similar, should be thinking about drawing up their own equivalent guiding principles? At least if they don't feel they've got enough information or expertise to draw up full policies, are principles at this stage useful?

Speaker 2:

Yes, 100%, I would say yes. A few issues back, we shared some examples of actual guidelines that organisations had written. There was one from the civil service, and there was another example, a policy that had actually been written by a software company, and, as you say, it may be that cultural organisations are not quite at the stage where they feel ready to write those types of policies.

Speaker 2:

I will predict right here and now that within 12 months, there will be some kind of issue that will come out from a cultural organisation because of some use of AI that has had a negative consequence, either because of using it for creative purposes and some kind of ethical issue, or, you know, perhaps, you know, factual stuff or some other thing.

Speaker 2:

I just think it's inevitable, really, and the problem is that a lot of organisations are not even clear what their staff may or may not be using. We did some work recently with a medium-sized cultural organisation. It was actually a kind of piece of digital skills and capacity-building work, but as part of that we ran a big survey, and one of the things we asked is, you know, what tools are you currently using when it comes to AI? And there was a huge range of them, and, you know, there are no guardrails there about what is being done with the stuff that's being generated, etc. So, yeah, I think, at the very least, find out what your staff are using now and then come up with just some basic principles, and keep a more in-depth policy on the table, for sure.

Speaker 1:

Our second thing is the open letter that YouTube CEO Neal Mohan published a couple of weeks ago, which outlined four big bets for 2024. Now, all four of them are quite YouTube- or content-centric, as you might expect, but I'll go through them quickly. Number one is AI will empower human creativity. Number two is creators should be recognised as the next generation studios. Number three is YouTube's next frontier is the living room and subscriptions. And number four is protecting the creator economy is foundational. Now, the first two are maybe a bit, sort of, I see, fair enough, but three and four strike me as interesting. What's your sense of what this might tell us about YouTube's focus and priorities, or perhaps the wider content context that YouTube invariably informs?

Speaker 2:

Yeah, it is really interesting. As you say, on the one hand it's quite easy to dismiss these as, well, a bit so what? But take the point where they say YouTube's next frontier is the living room and subscriptions.

Speaker 2:

There's an interesting point in here, and I think it's largely a generational one: those of us who are old enough to remember pre-YouTube would absolutely have a distinction between things that you watch on the television, and even streaming services like Netflix etc., and YouTube, which originally was sort of a social media platform, and then it was a content platform, and now it's all those things. And he's essentially saying YouTube has become, is becoming, no different to Netflix, and certainly generationally, if you speak to Gen Z, they will absolutely have that view that there's no difference. They watch things interchangeably; it's all content. There's something in all of this, which is a longer discussion for a different day, about how originally YouTube was about democratising who has a voice and who gets to reach audiences, and that's still the case.

Speaker 2:

But ironically, perhaps, as YouTube becomes more of a we-are-a-content-provider, come-and-watch-our-content platform, actually those bigger content creators will rise to the top, and perhaps we then end up in a situation where you have fewer but bigger content providers, rather than that sort of notion of niche content. But I think that needs unpicking a bit more. And his other point, about protecting the creator economy, essentially comes back to the sort of AI issue, as well as online safeguarding for kids.

Speaker 2:

The fact that they're saying, in the context of the sort of election year, that they're going to be introducing labels that will flag this is AI-generated content is particularly interesting when viewed through the lens of X and Elon Musk and the fact that it's just fair game on there. Who knows what X is going to be like, in terms of content, as we get nearer the US elections. I personally find it, what's the word, fairly comforting that YouTube are at least trying to push back against some of the disinformation, but it's going to be hard work, for sure.

Speaker 1:

Yeah, and I think it is, as you say, encouraging that a CEO is saying things like that, rather than a CEO like Elon Musk, who says rather different things. Our final thing is the article in Engadget about Arc Search, which is titled "Who Makes Money When AI Reads the Internet for Us?" You said in the newsletter, while all of this is great from a user experience point of view, it has huge ramifications for the future of the internet. So what is Arc Search? Who is making money? Why does it have huge ramifications for the future of the internet?

Speaker 2:

So Arc Search is an iPhone app.

Speaker 2:

There is also a desktop browser version, and essentially within it, it has a feature called Browse for Me. If you want to find something that you would traditionally use, let's say, Google Search for, what it does is it uses AI to read and summarise web pages into a kind of formatted page, and that means, in theory, you don't then need to visit those pages as a user, and of course that means you're not seeing the ads, and you're not getting the cookies and the trackers.

Speaker 2:

And so, while that is seen as being a really positive thing for users, and this particular product has had huge amounts of positive press online, lots of people saying it's fantastic from a user experience perspective, of course what that means is that we could end up in a situation where there is a huge reduction in the number of sites being visited, and therefore current content creators online are not able to fund their content if they're using an ad-supported model, and so then we kind of go down a corridor of, in theory, less and less choice online. It's a very tricky thing to unpick, really, because I can absolutely see both sides of the argument.

Speaker 1:

It is a massive shift, and I've commissioned a bunch of essays to accompany the Digital Works Conference, one of which is from Storythings' Matt Locke, and he sort of writes about this very issue. He describes it as sort of the third age of the internet, and that the search- and social-driven dynamics of the past 20 years are being completely restructured by these AI-driven technologies, and actually search is going to play a different role, social is going to play a different role, and when I say different role, I mean in content discovery. I think it's going to be really interesting for cultural organisations, specifically, to then think about the purpose of their institutional website, and to think about the role these types of AI-driven services will play in the broader sort of mix of channels that they're active on, because I think what is undeniable is that this is a sea change, and I think it's about working out your place in all of that, or radically rethinking your digital activity and sort of removing yourself from it.

Speaker 1:

I was reading about a tech news platform, and they put everything behind a subscriber wall now. You can sign up for a free account, you don't have to pay for access to all of their content, but you do have to create an account, and that is because they're trying to build direct relationships with their readers, and they're trying to push back against AI tools scraping their content. So I do think there are going to be some interesting choices for cultural organisations to make over, let's say, the next five years, because the role of institutional websites, the role of social media, the role of search is sort of shifting under our feet.

Speaker 2:

Hmm, absolutely, and, again, there's a lot to unpick in all of that in terms of how it relates to arts and culture organisations. There's probably, there's definitely a blog post just in that. I suppose the positive for cultural organisations is, from a search perspective, in terms of users who are looking for practical information about venues, in a way, you could argue, it doesn't really matter if a user doesn't visit your site to find out about parking or access needs, as long as the information that's being summarised for them is accurate. And if a user has to go to your site to buy a ticket, also fine. There are, though, I would say, questions about your wider content strategy, around blog posts and other content that is created by your organisation to do different things. I think that definitely needs consideration through the lens of these sorts of developments, definitely.

Speaker 1:

So that is all three, well, actually six or seven, things for this week. I'm not sure if we're going to do another one of these before the conference, so hopefully see some of you at the conference. The issue of ethics and power and all of that has very much come through in everything we've discussed today, and Katie is chairing a panel discussion on considerations around ethics in digital working. We've got some really fantastic speakers joining that panel, so hopefully see some of you in Leeds in April. If not, we will be back again soon.

Speaker 2:

See you then.

Speaker 1:

Thanks for listening to this episode of Bytes. You can find all episodes of the podcast on our website at thedigital.works, where you can also find more information about our events and sign up to the newsletter. Our theme tune is Vienna Beat by Blue Dot Sessions. And, last but not least, thanks to Mark Cotton for his editing support on this episode.
