Digital Works Podcast

Episode 036 - Ash & Katie, Bytes #3 - Exploring Twitter's transformation (/disintegration), Advances in AI, and the new Web Accessibility guidelines

Season 1 Episode 36

The third in our new series, Bytes, where Ash and Katie discuss 3 things from the latest Digital Works Newsletter.

In this episode we'll be discussing some of the links in the newsletter that went out on October 20th.

We talk about:

- The Bloomberg article "The Moral Case for No Longer Engaging With Elon Musk's X"
- OpenAI's announcement that ChatGPT can now see, hear and speak
- The release of version 2.2 of the Web Content Accessibility Guidelines (WCAG)

When we discuss ChatGPT we also touch on this piece from Kevin Roose in the New York Times, and this speech from Mo Gawdat at the Nordic Business Forum.

You can sign up for the newsletter at thedigital.works.

Ash:

Hello and welcome to the Digital Works podcast, the podcast about digital stuff in the cultural sector. My name's Ash, and this is episode 3 of Bytes, our regular short-form series in which we look at three interesting things from our most recent Digital Works newsletter. You can sign up for the newsletter on our website, which you can find at thedigital.works. Joining me today, and for all episodes in this series, is the person who puts that newsletter together, my colleague Katie. Hi, Katie.

Katie:

Hi Ash.

Ash:

Today we're going to talk about some things that were in the newsletter that you sent out on October the 20th, and I'll put a link to that in the show notes for this episode. The three things that we've picked this time are an article in Bloomberg titled "The Moral Case for No Longer Engaging With Elon Musk's X", an announcement from OpenAI that ChatGPT can now see, hear and speak, and the release earlier this month of version 2.2 of the Web Content Accessibility Guidelines. First up, we have the article in Bloomberg from Dave Lee, "The Moral Case for No Longer Engaging With Elon Musk's X", which starts: "The former Twitter is incentivising violent content, which will only become worse, to stand out to users." What does this article tell us?

Katie:

Oh dear, this is so sad to me. One thing to say right off the bat about this article is that Dave Lee is a very respected and knowledgeable tech reporter who has spent many years reporting on the tech industry, so it's not written by some kind of inflammatory type. Essentially, what this is talking about is how, since Elon Musk has had Twitter, which he's now called X, he has changed the way in which the algorithm works, and he's also introduced something called creator payments, which means that people are incentivised, basically, to generate engagement. I think the phrase Dave Lee, the journalist, uses is "engagement farming". What the article is talking about specifically is an example of a video of a violent incident, very unpleasant, that was shared by one particular user and got huge amounts of engagement. That person is going to be rewarded with money for that. It just indicates how bad it's got on Twitter, and I think the ethics of keeping an account are now incredibly difficult to justify.

Ash:

Yeah, and I think it's sort of getting scary now, isn't it really, you know, with everything that's been in the news recently? But beyond that, we're seeing the outcomes of chasing and monetising engagement with absolutely no ethical boundaries.

Katie:

100%.

Ash:

And typically the things that drive the most engagement on the internet are inflammatory, violent, divisive, insulting and dangerous, and that is what Twitter/X now is. It is a platform that, as you say, both prioritises this content in its algorithm and rewards people with money for engagement, which is a neutral term, but the stuff that's getting engaged with is absolutely not neutral content.

Katie:

Exactly. It's so depressing to me that a platform like Twitter has gone this way. It was always such a useful tool, you know. I say this having not closed down my own Twitter account, let's be clear. Substrakt still has a Twitter account. I'm calling it Twitter, it's called X now, but you know. But I do think it's going to come to the point very soon where, you know, arts organisations are going to have to think very carefully about whether they want to still be on this platform. It's much more than just "oh, you know, Elon's ruined Twitter". It is fundamentally changing the structure of how it works and, as you say, what's prioritised on the platform, so it can only go one way from here.

Ash:

And, you know, Twitter has been problematic in various ways for a long time. Under previous leadership there were certainly problems, but there were also the odd attempted solutions to those problems, or at least there wasn't an active strategy that was clearly going to make things worse. And Dave says in his article: "X is now an app that forcibly puts abhorrent content into users' feeds and then financially rewards the people who are most successful in producing it, egging them on to do it again and again and make it part of their living."

Katie:

It is very depressing. One side note I will add: another article that is shared in the newsletter is about how NPR shut down their Twitter account in April this year, and they had, you know, a million-plus followers. I think, as a media organisation, that was seen as a very bold move, to remove yourself from Twitter, but they've actually found a negligible impact in terms of traffic to their site. So I think, for those organisations who may be thinking about leaving the platform, it will probably be fine and won't have that much impact on anything that you're actually doing.

Ash:

Yeah, we've talked previously about the Australian Broadcasting Corporation coming off Twitter and, again, not seeing a negative impact from that decision, and it does feel like the moment is fast arriving when cultural organisations need to seriously engage with that question.

Ash:

Next up, we have chosen to highlight this announcement from OpenAI that ChatGPT can now see, hear and speak, and they announce it, entirely as you might expect, as a positive thing. They have some sort of cuddly, banal examples of usage, where they say "snap a picture of a landmark while traveling and have a live conversation about what's interesting about it". In another article about this announcement that you shared with me, an article in the New York Times by Kevin Roose, he says "the ultimate goal, according to some researchers, is to create an AI capable of processing information in all the ways a human can", and this feels like another incredible but worrying leap forwards in the capabilities of these types of systems, these types of large language models.

Ash:

Or LLMs. And last week I saw a video from the Nordic Business Forum from someone called Mo Gawdat, who was giving a speech called "AI Today, Tomorrow and How You Can Save Our World", in which he says, you know, we keep underestimating the speed at which AI is developing, and that's because we have this belief, which in the past has been valid and applicable, about the path of technological advancement being defined by things like Moore's law, and the reality is that AI is completely subverting and exceeding those expectations and advancing way quicker than anyone could reasonably have expected. So, now that ChatGPT can see, hear and speak, what do you think this tells us?

Katie:

I like it. No, I mean, there's two different things there, isn't there: there is the capability of the thing, and then there is how you do or don't regulate the thing. I mean, you know, Sam Altman, who is the founder and CEO of OpenAI (so OpenAI are the creators of ChatGPT and DALL·E, the image generation AI tool), has been very clear in saying governments need to consider how we're going to regulate these tools. As you say, they are coming on at pace, and if we don't do it soon, we're going to probably regret that. So, definitely, 100%, there is something about regulation, and I made a slightly flippant comment in the newsletter when I shared that article, to say, oh, you know, I'm sure it'll end well. But I suppose if you leave that aside, if you leave the sort of regulation aside, it is incredible, isn't it, that the tool can now look at a photograph. So, in the article that you mentioned from the New York Times, and maybe we can put that in the show notes, although it's possibly subscription-only, he does things like get it to look at an image, a photograph of the front of the printed New York Times, and summarise what is in the photograph. Now, if you actually think about what the tool's doing there, it's quite mind-blowing.

Katie:

He goes on, though, to talk about how it's more the voice AI that he finds really incredible, in that you can talk to it and it'll talk back to you. And, you know, let's be clear, this is not a Siri or Alexa type tool. This is something that is talking to you in an interactive way, that is responding to what you're saying, accordingly. You know, again, he's saying you can have very open-ended, free-form conversations with it. I think it's natural that your brain tends to go, as mine does, to "oh dear, what's the negative of this?". But, you know, if we leave that aside, in terms of the use of these tools, whether in personal life but also in professional life, to get it to help you in your day-to-day work, create content, sense-check things, etc., it's amazing. It's incredibly powerful and positive. It's just that there's some risk that goes along with it.

Ash:

Yeah, we will cover that policy and regulatory aspect of this, I'm sure, in the future. But the practical, day-to-day usage that people could get out of this, it can be really useful. It can massively shortcut otherwise laborious tasks, you know: feed a research paper into it and ask it to summarise it in a way that is practical and useful for you; initial idea generation; prototyping of things, you know, giving you a starting point for work; or helping you to make sense of large data sets. It's really, really good at all that stuff. But, as you say, there are other implications which society needs to start engaging with, I think, for sure.

Ash:

And last up, we've got an article from Craig Abbott titled "WCAG 2.2 and what it means for you". Now, I will explain some of these acronyms. WCAG stands for the Web Content Accessibility Guidelines, and those are guidelines published by the Web Accessibility Initiative (the WAI) of the World Wide Web Consortium (the W3C). So many three-letter acronyms there, but basically these are guidelines that describe how the internet, and specifically content on the internet, can be made accessible. It's a best-practice framework. These guidelines operate under four main principles, which state that content must be perceivable, operable, understandable and robust, and a lot of legislation is typically linked to these guidelines.

Ash:

You know, legislation around accessibility regulations. And these new guidelines were released on the 5th of October. They do keep most of the version 2.1 guidelines, which many people may be familiar with, intact. They add nine new criteria and remove one. Of those nine new criteria, six of them, so two-thirds of the new things, fall under the A or AA levels, which is the standard that most people listening to this are going to be interested in. When there are regulatory implications, those are the sorts of standards we're talking about. But what else should people be thinking about when it comes to the release of a new set of guidelines like this?

Katie:

It's tricky, isn't it? You know, these guidelines are so important, and the previous version of the guidelines has done such a good job in terms of making accessibility much more front-of-mind for people, which is obviously imperative. One of the challenges, of course, with accessibility guidelines for a lot of arts organisations is that if you don't have an in-house specialist they can seem very overwhelming. Some of the guidelines, for example things around content, like making sure you have meaningful alt text on images, are really straightforward to understand. Others less so. You know, some of them touch on coding, and I think there's a risk, therefore, quite understandably, that organisations sort of, you know, put it to one side and think "we haven't got the time or the resource or the budget to think about this". But you have to; it's really important.
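
The alt-text point above is the kind of check that can be partly automated. As a rough sketch only (not something discussed in the episode; it uses just Python's standard library, and the page snippet and file names are made up for illustration), this flags `<img>` tags whose `alt` attribute is missing or empty. Note that an empty `alt=""` is legitimate under WCAG for purely decorative images, so a human still needs to review what it flags:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags whose alt attribute is missing or empty."""

    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        # No alt at all, or alt containing only whitespace, gets flagged.
        # An empty alt may be intentional (decorative image), so treat
        # these as candidates for review, not definite failures.
        if alt is None or not alt.strip():
            self.problems.append(attrs.get("src", "<unknown src>"))

def missing_alt(html: str) -> list:
    checker = AltTextChecker()
    checker.feed(html)
    return checker.problems

# Hypothetical page fragment for illustration.
page = """
<img src="logo.png" alt="Theatre company logo">
<img src="hero.jpg" alt="">
<img src="banner.jpg">
"""
print(missing_alt(page))  # prints ['hero.jpg', 'banner.jpg']
```

A check like this only covers one small success criterion; meaningful alt text (and the rest of the guidelines) still needs human judgement.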

Katie:

You know, I'm not an expert in accessibility at all, but they don't seem to be adding anything onerous. They are really just refining and, you know, making the guidelines better. I guess, you know, the advice we would give anyone is, number one, don't resort to an accessibility overlay and think that that will solve the problem of your website's accessibility. And if you're unsure where to start, there is a lot of really good guidance and research out there, and, you know, yes, there are also accessibility specialists you might want to go to. But if you don't have the budget for that, there are lots of free resources that can help, and I guess it's a case of just making a start. You have to. Yeah, you can't ignore it.

Ash:

Yeah, and I think, you know, as you say, there are lots of fantastic consultants and agencies with this specialism that work specifically with cultural organisations and really understand the nuance and priorities of cultural organisations. But also, you will have many, many people who have specific access needs in your audience. Reach out to them, you know, to understand what their needs are, and work with them to make your digital content and digital experiences more accessible. As you said, Katie, this stuff is super important.

Katie:

Yeah, as it says in the summary. You know there's nothing in the new version of the guidelines that is insurmountable or really complex. So from that perspective it's all positive. It's just, I think from our perspective it's reminding people about the importance of accessibility and of these guidelines.

Ash:

Given Twitter's slow and seemingly inevitable demise, you can also find us on LinkedIn, which, weirdly, has become our sort of main platform of choice, so maybe see you there. Digital Works, Katie and I can all be found on LinkedIn. This is a new thing, so do let us know if you have any questions, comments or suggestions for improvements, and we'll see you next time. Thanks for listening to this episode of Bytes. You can find all episodes of the podcast on our website at thedigital.works, where you can also find more information about our events and sign up to the newsletter. Our theme tune is Vienna Beat by Blue Dot Sessions. And, last but not least, thanks to Mark Cotton for his editing support on this episode. See you again soon.
