On Digital Rhetoric — informing the future of Experience Design in man-machine interactions & human agency.

Digital Rhetoric surfaces today as a topic of interest across many faculties in academia. That breadth lends its definition an inherent pliability, a fluidity informed by an evolving understanding of what it means. Given this, Digital Rhetoric requires an operational definition before this post can begin to address its application in the general fields of Digital Design and User Experience.

What constitutes Digital Rhetoric as a field worthy of study is still evolving as we grapple with new modes of media expression and with how humans interact with these forms of digital media.

The notion of “rhetoric” has many origins and resurfacings; however, the most popular and most often adopted meaning stems from its Classical roots: to persuade. It also refers to the study of persuasion itself.

Rhetoric is simply the art of persuasion. Along with grammar and logic, it is one of the three ancient arts of discourse, with classical rhetoric focused on training speakers to persuade effectively in realms such as politics and law.

Without splitting hairs between the Platonic and Aristotelian definitions, it helps to understand some of the nuances of the meaning. Rhetoric is a speaker’s ability to discern what resources are available to persuade an audience. The persuasive speaker presents himself or herself as someone of good intention in order to convince an audience.

Plato, however, saw rhetoric as the persuasion of ignorant masses within the courts and assemblies. He believed it was a form of flattery that stemmed from ill intent.

Digital Rhetoric is the art of informing, persuading, and inspiring action in an audience through digital media. It is an advancing form of communication composed, created, and distributed through multimedia platforms.

In this post, I focus on instances of man-machine interaction, including on the Internet, where humans have a questionable level of agency over their own actions.

Agency is defined here as:

Agency is the capacity of an actor (person) to act in a given environment. The capacity to act does not at first imply a specific moral dimension to the ability to make the choice to act, and moral agency is, therefore, a distinct concept. In the social sciences, an agent (person) is an individual engaging with the social structure.

  • Does forced usability, driven by coercive UI affordances, limit user outcomes through enforced control of technology?

  • Do hidden affordances that advance business needs rather than benefit users subjugate our agency in cyberspace?

  • Does the use of our data to manipulate outcomes limit our full agency?

  • Do we mean to do the things we do online?

  • When a company tracks you around the internet because you looked at a pair of shoes seven days ago, does that constitute some level of stalking?

  • Did you need that new phone, or did Cyber Monday make you buy it?

These are all questions relevant to Digital Rhetoric as an area for exploration. What, if any, is the degree of coercion in our interactions with computers?

And as this form of communication becomes even more advanced, aided by Artificial Intelligence, the questions become even more complicated. From the ads that follow us online to the political conversations on social platforms, there is strategic deliberation in the conception, composition, and distribution of messages across multimedia platforms, all intended to produce a desired action.

So what is the role of Human Experience Designers in this new world order?

Human-to-machine interactions born of Artificial Intelligence subjugate our natural human agency and prey on natural human tendencies, serving the ends of both known and unknown agents in cyberspace and beyond.

We do not often think of Rhetoric when we think of our day-to-day use of computers. As we meander through our daily interactions with them, it behooves us to ask whether the snap decisions we make are solely our own. Algorithms are very good at manipulating data and its inherent structures to inform and goad us into actions that many of us believe are our own. And yes, these elements are all part of persuasive rhetoric.

I fell in love with the general topic of Rhetoric when I first took a graduate-level course, Metaphors of Computing. It seemed the perfect convergence of Computers, Linguistics, and Philosophy, one that beckoned a new way of thinking about human agency. Years later, I am still fascinated by the topic as computers have become more prevalent in our daily lives, and my interest and awareness are further heightened by my work as an Experience Designer, advocating for human(e)-centricity in design.

Informing Human Experience Design

So you may be thinking — what does that have to do with me as an Experience Designer?

A few years ago, when I was managing a team of software designers, one of the interaction designers requested a few minutes to ask me a question, one that up to this point I had thought about but never been asked.

He wanted to know why we had started automatically opting all users into the collection of usage data and some personal data during the installation and use of a product.

He was concerned that people would overlook a small checkbox affordance and use the application without realizing that their usage data was being tracked. Further, he was not clear on the contractual obligations the company had with Amazon, a company notorious for massive usage-data collection. What should he do with that part of the UI? In his words:

This is sort of creepy and not kosher business practice.

Given his anxiety around this, I escalated the concern to my boss, a Director of Business Strategy. His firm response (paraphrased) went something like:

… well we are already giving them so much for free in this release and this instance of the software is being deployed in the BRIC (Brazil, Russia, India & China) countries, first as a pilot, and for students anyway, so they should be happy to be getting something for free.

I don’t know whether I was more appalled by his geographic references or by the casualness of his indifference to the use of people’s data. Born in a “so-called” Third World country, I was mentally and emotionally numb for thirty seconds before regaining composure. Sadly, I saw a side of him that set the tone for the rest of our engagement; I know he saw it too, as he tried to make one of his jokes while I left with a faint:

I see how it is.

Like many who have dealt with this corporate mentality, I felt this laissez-faire attitude could come to no good; if it doesn’t feel right, it probably isn’t. This encounter was just one of many canaries in the coal mine of data and privacy that we bypass every day as “that’s the way it is.” But is it?

Either way, we were overridden, and today this software is widely released. I am not sure whether the affordance to opt out was ever made more salient, but these are the kinds of cues that designers and others need to stay attuned to if they are to make the right design decisions for human users.

It would take a tech-savvy, sharp-eyed user to uncheck this box, and most users are not that sharp-eyed about technology.
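To make the pattern concrete, here is a minimal sketch of the two defaults at stake. The shape and names (TelemetryConsent, usageDataEnabled, and so on) are my own illustrative assumptions, not the actual product’s settings:

```typescript
// Illustrative sketch only; the interface and field names are hypothetical,
// not drawn from the real product described above.
interface TelemetryConsent {
  usageDataEnabled: boolean;    // feature and usage tracking
  personalDataEnabled: boolean; // personal data collected at install time
}

// The pattern the designer objected to: consent pre-granted behind
// a small, easily overlooked checkbox.
const autoOptIn: TelemetryConsent = {
  usageDataEnabled: true,
  personalDataEnabled: true,
};

// The human-centered alternative: nothing is collected until the
// user makes an explicit, informed choice during installation.
const explicitOptIn: TelemetryConsent = {
  usageDataEnabled: false,
  personalDataEnabled: false,
};
```

The difference is a single line of default configuration, which is precisely why it is a strategic decision rather than a technical constraint.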

In this personal experience lies the problem of not understanding what we sign up for when we allow our data to be tracked. We, as average users of technology, consent to data collection without fully understanding the act or its subsequent consequences. We are not clear on the impact, or on when and whether, in some future transaction, the metaphorical “monkey on the back” will take over our agency in man-machine encounters.

I tell this story to underscore the baked-in and overt persuasion tools that deliberate strategic decisions, made by humans in corporations, represent. Such “strategic decisions” also extend to the unauthorized sharing of our data with partners, for their own ends.

What can Experience Design do to help?

As an Experience Designer, one of the things I advocate is this: if a UI element does not pass the “creepiness” and ethical-design test, step back and ask the necessary questions until the answers leave you feeling okay. This does not, however, address the usage data silently captured with each keystroke; those aspects are often beyond the control of most designers.

Second, consider joining privacy and security working groups in your company as the voice of the customer, helping to inform the strategic design of solutions that pass both the creepiness and the ethical tests.

Lastly, if personal data is needed, let users know why they are giving it away, how long it will be kept, and all the other things consumers today want to know about their data’s security. Transparency of engagement can only strengthen the trust consumers place in your products and services.
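As a sketch of what that transparency could look like in practice, consider a consent request that answers those questions up front. The ConsentRequest shape and its fields are hypothetical, offered only to illustrate the principle:

```typescript
// Hypothetical shape for an up-front, transparent consent request;
// field names are illustrative, not drawn from any real product or API.
interface ConsentRequest {
  purpose: string;          // why the data is needed
  dataCollected: string[];  // exactly what is captured
  retentionDays: number;    // how long it will be kept
  sharedWith: string[];     // any partners it is shared with
  defaultEnabled: false;    // opt-in must never be pre-checked
}

const usageAnalyticsRequest: ConsentRequest = {
  purpose: "Improve feature discoverability from anonymized usage patterns",
  dataCollected: ["feature clicks", "session duration"],
  retentionDays: 90,
  sharedWith: [],           // disclose partners explicitly, or state none
  defaultEnabled: false,
};
```

If a design cannot fill in those fields honestly, that is itself a signal worth escalating.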

To conclude, this post is a primer on the topic, and I invite comments, thoughts, and further discussion, as more voices are needed. Ensuring that our customers are seen as human beings with their own agency is essential, and it requires a new level of reflection in an era where anything and everything is for sale.
