Does Technology Discourage Critical Thinking?



On a recent trip to the Netherlands, we had the chance to see my very excellent cousins during a family visit. One of them works in data and privacy protection, advising governments, universities, and corporations on the risks and remedies of data gathering, use, and security. I always find his stories fascinating and, frankly, a bit petrifying. When you learn about data breaches, big-data analysis and the capabilities and practices it enables, and their actual or potential consequences, it makes you want to withdraw from all social media and buy only products that are NOT wired.

We’re all aware of the consequences of sharing data with commercial interests, online shopping being the most obvious. You know how it goes: you search Amazon or some other online retailer for something of remote interest, choose not to act on what you learn, and then it comes up in the strangest places as you surf the web. You’re browsing the local ski area’s site to see if it has snow, and suddenly up pops an advertisement for that one-handled pipe wrench you looked at just the day before. Or having looked at all kinds of door latches to repair that nagging closet door that won’t close, you’re simultaneously presented with a variety of door handles you might be interested in, in styles similar to the latches you spent more than a few seconds viewing. Or as you’re listening on a streaming service to a favorite musical artist from years gone by, you’re presented with artists and songs you’ve never heard of that you might like. Your interests and likes are recorded at every turn, with every click on a product photo or selection from a menu of options, and then processed through an algorithm to predict the interests or products likely to appeal to you.
The part of this recent conversation with my cousin, the data-privacy expert, that I found most interesting was the hypothesis that this same kind of pre-selection and alignment of interests might also contribute to the further polarization of political and social discourse. Now stay with me for a moment. The premise is that as you select from news and media sources or share your likes on social media sites, algorithms tailor future presentations of news and information to fit those proclivities. In other words, you see more of what you already think, or feel, or believe, or agree with, and less of what you don’t. That reinforces those thoughts and beliefs, and marginalizes contrarian views or evidence. The suggestion was that this has the potential to further dull critical thinking, narrowing our field of vision.

In an earlier posting (see "Truth or Fact"), I shared the mental model of a soldier versus a scout. Soldiers search for evidence supporting a predetermined narrative or hypothesis, discounting or marginalizing contradictory evidence. Scouts, in contrast, have a curiosity that causes them to seek evidence that disproves a hypothesis or narrative and exposes alternative realities. What my cousin was suggesting, I think, is that big data and the corresponding algorithms, when applied to social information, feed the soldier and starve the scout. The risk is that they discourage the search for knowledge and truth, and further polarize societies. If so, it would demonstrate yet another level of unintended and unfortunate consequence arising from the evolution of information technology and our willingness to share personal information.
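For the programmers in the audience, the feedback loop is easy to make concrete. Below is a toy simulation of my own devising (nothing my cousin described, and far simpler than any real recommender): opinions sit on a -1 to +1 axis, a personalized feed serves whatever is closest to the algorithm's current estimate of the reader's leaning, and each click nudges that estimate. Compared with a feed drawn at random, the range of opinions the reader ever encounters collapses.

```python
import random

def personalized_feed(preference, catalog, k=5):
    # Serve the k items closest to the algorithm's estimate of the user's leaning.
    return sorted(catalog, key=lambda item: abs(item - preference))[:k]

def random_feed(rng, catalog, k=5):
    # Baseline: k items drawn with no regard to the user's leaning.
    return rng.sample(catalog, k)

def spread(items):
    # Width of the opinion range a user has been exposed to.
    return max(items) - min(items)

rng = random.Random(42)
catalog = [rng.uniform(-1, 1) for _ in range(500)]  # items on a -1..+1 opinion axis
preference = 0.2  # the algorithm's initial guess: slightly off-center

seen_personalized, seen_random = set(), set()
for _ in range(100):
    shown = personalized_feed(preference, catalog)
    seen_personalized.update(shown)
    clicked = min(shown, key=lambda item: abs(item - preference))  # user clicks the closest match
    preference = 0.9 * preference + 0.1 * clicked  # the click feeds back into the model
    seen_random.update(random_feed(rng, catalog))

print(f"opinion range seen, personalized feed: {spread(seen_personalized):.2f}")
print(f"opinion range seen, random feed:       {spread(seen_random):.2f}")
```

The personalized reader ends up seeing only a sliver of the opinion axis, while the random reader sees nearly all of it. Real systems are vastly more sophisticated, but the direction of the effect is the point: the more faithfully the feed mirrors your clicks, the narrower your field of vision.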

He might think I’ve over-extrapolated his point, but what the heck. It leaves a lot to talk about for our next visit.
