The Tommy Edison Experience is one of the YouTube vlog channels I’ve subscribed to and tend to follow. Just today I saw them tweet about their latest video under an alert-coloured headline: “Our channel is in trouble”. Yellow text on a black background, a bit like the tabloids use on their posters.
The Tommy Edison Experience had noticed that people who had requested notifications for new postings on the channel were no longer getting a notification for each video. As is typical for these kinds of vlogging channels, their business model leans heavily on how many views a video gets within a short period right after publication. Such notifications are therefore vital for these channels, and it is alarming when a channel discovers that they are not being sent.
YouTube basically has a monopoly as a platform for publishing videos. There are other platforms, but they are not really known to the general public. The next best thing is actually publishing videos on Facebook, which supports uploading videos and even streaming live video. Still, if you ask a random person how to get a video onto Facebook, they will in all likelihood answer that you put it on YouTube and post the link to Facebook.
What I believe to be going on here is the further introduction of machine learning and data analytics to help people manage information overload. For a long time now, Facebook has not shown you everything your Facebook friends post, once you have enough of them. I’m certain that my more than 500 Facebook friends generate such a gush of posts that I could scroll through it continuously and never reach the front of it, or even keep it from escaping me. I follow over 600 accounts on Twitter, and there too I can trust that every time I refresh my feed, there are new posts to read. And those are limited to 140 characters, unlike Facebook updates.
There was an uproar some while back when people realised that Facebook does automatic filtering. But for the sake of the user experience, it has to.
These are the same technologies used in online travel agencies and other stores. In an online travel agency, the system observes which travel offers you are looking at, and starts 1) targeting related offers at you in the advertisements, and 2) using pricing strategies to maximise the amount of money it can get out of you. If you check flights to Malta and get some quotes, and then do the same a little while later, the prices are higher. This is by marketing design, not coincidence. It makes you feel like the seats are selling out, and that if you want to travel, you should buy the tickets right now, before the prices rise even higher. The seller benefits both from motivating you to close the purchase and from getting more money out of you than the first offer asked.
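The repeat-search pricing strategy can be sketched in a few lines. This is a minimal, purely illustrative toy, assuming the simplest possible signal (how many times this visitor has searched the same route) and an invented 5% bump per repeat search; it is not any real agency’s code.

```python
# Toy sketch of behaviour-based pricing: quote a slightly higher
# price each time the same visitor searches the same route.
# All names and numbers here are illustrative assumptions.

def quote_price(base_price: float, search_count: int, bump: float = 0.05) -> float:
    """Return a price that grows a little with each repeat search,
    nudging the visitor toward buying now."""
    return round(base_price * (1 + bump) ** search_count, 2)

searches = {}  # route -> number of times this visitor has searched it

def search(route: str, base_price: float) -> float:
    count = searches.get(route, 0)
    searches[route] = count + 1
    return quote_price(base_price, count)

print(search("HEL-MLA", 200.0))  # first search: 200.0
print(search("HEL-MLA", 200.0))  # a while later: 210.0
```

The point of the sketch is only that the price change is deterministic and keyed to your own behaviour, not to seat availability.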
For a couple of years now, I have been wondering where adaptive spam filtering went. Certainly, my mail readers have a button to mark e-mail as spam, and a junk mail box. Mail does get automatically thrown into the junk mail box, but quite a bit still gets through, and marking mail as spam leaves me without a sense of agency. The feature is too invisible: is it learning anything? The field has also gone quiet in popular discussion. When the feature was first introduced in mail readers, it was talked about; now there is no public sign of this technology being developed at all, even though machine learning and data analytics are trending.
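The classic technique behind that “mark as spam” button is a naive Bayes classifier that updates word counts every time the user labels a message. Here is a minimal sketch of the idea, assuming whitespace tokenisation and add-one smoothing; real mail clients are far more elaborate, and nothing here reflects any particular product’s implementation.

```python
# A minimal adaptive spam filter: naive Bayes over word counts,
# learning from each "spam" / "not spam" click. Illustrative only.

import math
from collections import Counter

class AdaptiveSpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def mark(self, text: str, label: str) -> None:
        """Learn from the user's 'mark as spam' / 'not spam' click."""
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def is_spam(self, text: str) -> bool:
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values())
            # log prior + log likelihoods with add-one smoothing
            score = math.log(self.msg_counts[label] + 1)
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1)
                                  / (total + 1))
            scores[label] = score
        return scores["spam"] > scores["ham"]

f = AdaptiveSpamFilter()
f.mark("cheap flights buy now", "spam")
f.mark("meeting notes for friday", "ham")
print(f.is_spam("buy cheap pills now"))  # True
```

Note that the filter only improves when the user’s clicks actually feed back into `mark` — which is exactly the loop that feels invisible in today’s mail readers.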
Another hype term relevant here is big data. There are more and more videos offered on YouTube, postings published on Facebook, spam sent through e-mail, and things sold online. Still, there is a limit to how much data a single user can process. Hence, big data, machine learning, and data analytics are being applied to design digital assistants – the equivalents of secretaries, who decide which mail, appointments, and phone calls get delivered to those they work for, and which get filtered out or postponed. I can’t afford to employ a person as my secretary, but I can afford to have my mail reader do the job for my e-mail, and Facebook’s filtering system do the job for the postings of my Facebook friends.
On the other side of the board, Facebook and YouTube can’t afford an army of moderators to go through all the material uploaded to their services, or probably even to process all the reports users send about material needing moderation. There are rumours that they use intelligent algorithms here, a bit like adaptive spam filters. Such a service could be based on neural networks and other clever data analysis tools, but even something as simple as counting how many people report an incident could work quite well. Essentially, that is what worked for the Google search engine, until people became more aware of search engine optimisation, and abuse made the overly simple algorithm fail too often at distinguishing relevant search results from irrelevant ones and downright spam.
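The “just count the reports” idea can be sketched as a tiny moderation queue: a video gets flagged for human review once enough distinct users report it. The threshold, the identifiers, and the function names below are all my own assumptions for illustration.

```python
# Report-counting moderation sketch: flag a video for human review
# once enough *distinct* users have reported it. Illustrative only.

from collections import defaultdict

REVIEW_THRESHOLD = 3  # invented number; a real service would tune this

reports = defaultdict(set)  # video_id -> set of reporting user ids

def report(video_id: str, user_id: str) -> bool:
    """Record a report; return True once the video crosses the
    threshold and should go to a human moderator."""
    reports[video_id].add(user_id)  # a set, so repeat reports by the
                                    # same user don't count twice
    return len(reports[video_id]) >= REVIEW_THRESHOLD

report("vid42", "alice")
report("vid42", "alice")          # duplicate, ignored
report("vid42", "bob")
print(report("vid42", "carol"))   # True: three distinct reporters
```

The set per video is the whole defence this toy has against abuse; as with early search ranking, once people learn the rule (here, recruit three accounts), a counter this simple starts failing.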
It’s worth noting that it is Google that owns YouTube today.
As the amount of data increases and these features are developed, this might be the first time it practically matters whether you are a paying customer of a service or the product being sold. Is it you who pays for the service, or are you using it for free, so that someone else is paying for the decisions on how the service is developed?
This is actually not a two-way street; there are three parties to it: 1) the people who use the system (YouTube users, vloggers, Facebook users), 2) the people who pay for the system (advertisers, commercial YouTube channels), and 3) the people who profit from the system (the Facebook and Google corporations). The third party has the biggest say in how the system is developed, as they want to maximise their revenue. The second party has the next word, as they decide whether or not to pay for the service. The first party has the least say, but still, they are the content providers, and they have to stay content enough to keep using the service for the service to have anything to sell.
In the future, our cars will drive themselves, and our computers are likely to be very much like Jarvis in the Iron Man movies – artificial intelligences that truly fulfil the definition of the companion model of interaction. Some people fear that this will be the end of us, that the machines will take over and destroy us, but I am more optimistic. (One of my next postings will say more about this optimism of mine.) The real threat here is not a technological apocalypse but, rather, which of us will be in control of the changes, and which of us will just be swept along in the flow. At the very least you should be aware that this is how things are, rather than being paranoid and imagining conspiracies of evil people out to get you.
Tommy Edison and Ben Churchill: “Our channel is in trouble”, The Tommy Edison Experience, 3.8.2017, https://www.youtube.com/watch?v=JaOP2b4PbtY (accessed 4.8.2017)

Stuart Dredge: “How does Facebook decide what to show in my news feed”, The Guardian, 30.6.2014, https://www.theguardian.com/technology/2014/jun/30/facebook-news-feed-filters-emotion-study (accessed 4.8.2017)

Sophie-Claire Hoeller: “This is the one thing you should do when searching for flights online”, Business Insider, 5.10.2015, http://www.businessinsider.com/clear-cooking-when-searching-for-flights-online-2015-9?r=US&IR=T&IR=T (accessed 4.8.2017)

Janet Murray: “Inventing the Medium: Principles of Interaction Design as a Cultural Practice”, MIT Press, 2011.