Week 3 Reflection & Discussion
Question Two:
How can we analyze the ways that members of a culture use technology as a locus for evolving or conflicting cultural practices and social change?
Standage’s history of the birth and adoption of the telegraph takes in events still familiar to us today - the wrestling with legislation and the desire to regulate by elected officials who struggle to understand the technology itself. It is a repetitive locus which appears whenever there is a legislative need to understand the new. We see this in Morse’s 1842 appeals to Congress for funding as much as we do in recent committee hearings on Facebook’s data privacy or OpenAI’s risk management of the robots taking over. In many instances, those who seek to understand these technologies are years behind actual public use. In Morse’s case broad adoption hadn’t happened yet, but in Meta’s case legislators were years behind the hundreds of millions of users for whom relaxed privacy guidelines contributed to highly malleable, monetizable and normalized online behavior. The artificial intelligence train has definitely already left the station and is deeply woven into our lived experience.
But what unites these cases across time is sustained concern about unintended risk - about spending public time and money on something which can neither be seen nor understood, especially by those with a constitutional responsibility to their constituents. We cannot ‘touch’ the means by which a telegram is transmitted any more than we can ‘feel’ how AI works. Many of the recent congressional hearings vividly illustrate the wide gulf between user and legislator, where elected officials grapple to understand how even the most fundamental aspects of online behavior function. Politicking is everywhere, and the space between innovation and responsibility becomes disputed. Any regulatory effort is often years late. But the thing about unintended consequence is that it’s almost impossible to see, especially while we are still living through it. Did Mark Zuckerberg really foresee, from his dorm room at Harvard, how Facebook would be weaponized to influence political outcomes? Did Morse really understand ‘what hath God wrought’ when he asked for funding? Is asking for stricter regulation around the use of artificial intelligence a means by which Sam Altman can pull the ladder up after himself and secure his own competitive advantage? In Standage’s account we see familiar conflicting echoes of congressional dismissal and the need for technologists to justify their own behaviors in order to stay on the right side of the law, while also attempting to frame their future benefits for a wider public.
Question Three: Connecting ideas from this week’s readings, lectures, and/or media to topics of your own interest
I’m fascinated by the cultural assumptions we make when faced with the introduction of technological innovation, and by the delta between what we think a technology can be and how we actually end up using it. Edison’s phonographic dictation machine, evolving from the efficiency premise of recording sound for the purposes of business into the enormous industry of sound recording itself, illustrates the space between what we think we want and what we actually want, and how speculation often only exists as our best guess. And in this speculation, we are forced to make assumptions. About adoption. About value. About risk. Bad actors temper the triumphalism of those who innovate. And while everything has consequence and cost, we rarely know the terms of the exchange. We thrill at the science fiction of artificial intelligence as it becomes very real, while at the same time fearing our own obsolescence as the robots take our jobs. We simply cannot understand the cost while we are simultaneously living through it.
Even with the recent rise (and even more recent decline) of social media, the cost and consequences are only starting to be understood. An enormous amount has happened on the internet over the past twenty years, and innovators have had to make a lot of assumptions along the way. Product teams building these experiences make assumptions about user behavior and degrees of risk they feel are acceptable. They make assumptions about operating cost and expected revenue. Engagement and habituation. Privacy and targeting. These assumptions are informed by qualitative and quantitative insight, but they are only hypotheses until the products are in the hands of users. And by then it can often be too late.
A recent example of artificial intelligence assumptions gone awry involved a Guardian article reporting on the death of a young woman. When the piece was aggregated into Microsoft News’ platform, a poll generated by artificial intelligence and placed adjacent to it asked readers to speculate on the cause of the woman’s death, offering them three choices (https://www.nytimes.com/2023/11/02/business/media/microsoft-guardian-ai-poll.html). Highly inappropriate, but the unintended consequence of automated engagement placed next to a serious piece of journalism. And while Microsoft walked the experience back in the most corporate of language, saying “A poll should not have appeared alongside an article of this nature, and we are taking steps to help prevent this kind of error from reoccurring in the future”, it was already too late, especially for those connected to the young woman. Retraction and apology mean little when the technology has already resulted in human harm.
Discussion:
The transformative potential of the telegraph didn’t just collapse the time and space of communication, it changed communication between people itself. Motivated by economic constraint - senders being charged by the word - a new shorthand and ‘common form of abbreviation’ developed where, for example, ‘SFD’ was understood to communicate ‘Stop For Dinner’. Even though this was the shorthand of the mid-nineteenth century, it’s still a highly familiar form to us in electronic messaging. Initially constrained by character count as well, mobile messaging strongly echoes telegraphic abbreviation in our (now widespread) understanding of electronic shorthand. We LOL when we read a joke. We TTYL when we sign off. We ask others to LMK ASAP and berate them for TMI when they do. Further still, letterforms themselves have given way to the sparse symbology of emoji. We thumbs-up our agreement. We send hearts and kissing faces for love. We send fire and biceps for excellence.
It’s abbreviated efficiency, but it’s also a form of communication in itself, and can often serve to separate as much as it expedites. My grandmother still sends me LOL (lots of love) when she ends a message. When she recently heard our pet had passed away, she sent us an awkward note, which read ‘thinking of you all today, so sorry to hear the news lol’. Unintended consequence indeed. Our reaction? OMG. Abbreviations fall between generational divides as much as they do economic and cultural ones. Professionally, I still have to think about what TL;DR actually means, just as I do with FWIW, or with XO denoting hugs and kisses.
In Midgley Jr.’s example, and inadvertently echoed by my grandmother’s texting, we can only really understand unintended consequence in retrospect. And sadly, that understanding often comes late. In Midgley Jr.’s case, the ‘unknown unknowns’ are so large as to be existential for us as a species. For as much as we are seduced by the potential of progress, there is always cost. And many times it is a human cost. We may love the feeling we get when we upgrade our iPhone, but there continues to be enormous invisible human cost on the other side of that transaction in the mining of elements and mass production of our devices. We love the innovation of next-day delivery but are indifferent to the human cost by which it happens.
This week I’ve been thinking about Dr. Feaster’s thoughts on the time-shifting of live performance which came with the introduction of recorded sound, especially in the context of our question of how culture affects our understanding of a technology and how we use it. In an era of digital streaming, time-shifting is everywhere. Even events which we perceive to be live are rarely that. Live sports and political gatherings are always on a time delay, or presented as live when they were expressly pre-recorded to simply appear so. And where the immediacy of live has moved from corporation to individual through the use of mobile technology, live as an experience is still highly mediated. There is always someone between the sender and receiver, as much on a Zoom call as there was in the sending of a telegram. So it’s not just recognizing that there is a time shift, but asking who is time-shifting and why. In corporate instances, much of this is in place for legal protection (think of the breakdown in process during Will Smith’s Academy Awards episode), but in individually streamed instances, this mediation is much less pronounced. Google is the intermediary between streamer and viewer on YouTube, and in doing so, monetizes both creator and viewer attention.
What do you think?
Disclosure: I work for NBC News and am highly involved in a large-scale mediated event this week: The Republican candidate debate on November 8th in Miami: https://www.nbcnews.com/politics/2024-election/third-republican-debate-presidential-race-host-nbcuniversal-rcna120093
Follow-Up: Taken By Myth, Or Just Mythtaken?
This week we talked a lot about the myth of progress. And how we can only really understand such myths in retrospect, and potentially not at all, as progress doesn't have an ending point. Only pauses and changes in direction. Our understanding of progress is often that of a roiling sea which crashes against the ship of lived experience, hurling us this way and that through life.
But one of the things I'm thinking about inside the myth of progress is the importance of human agency. For as much progress as we may think is happening all around us, and for as much as we may connect our cultural systems of belief, economics and lifestyle to it, all of this is still a choice. But it's often a choice of privilege. There are luxury hotels and vacations which offer digital disconnection services, just as the underprivileged can be excluded from the opportunities such perceived progress might bring. I keep coming back to the same question each week. Progress for whom?
I think we often lean on progress as a proxy for 'everyone'. We also use progress as a substitute for 'better'. We look at the quantitative data around global life expectancy increasing while overall poverty shrinks and we conclude that a narrative of progress is happening. Yet such progress isn't created equally. For as much global digital connectivity as there is, the global 'we' is still facing down enormous problems of climate, conflict and inequality. So perhaps it's not a myth of progress, but more an articulation of upward and downward trajectories. Is it still progress when only specific cultures benefit? And these trajectories always come with cost. The narrative of progress in growing digital connectivity comes with the environmental and human cost to power it. The narrative of increasing global health is still coupled with a recent global pandemic which infected 770 million people and killed 7 million.
So perhaps progress is only ever a narrative at best. And like the best myths passed down through oral tradition, a set of stories we tell ourselves to help make sense of the roiling world around us.