Automatism + Automaticity – 20th Century

If you thought my previous introduction was dark, then coloring in the details of what comes next with a little imagination may actually make you begin to lose your faith in humanity – but I choose to spare you such sketches, because I don't even want to have to stomach such gory images of reality myself. *

Throughout the 20th Century, there was much interest in automatism and automaticity. The beginnings of this interest can easily be traced far back into the 19th Century, but rather than meticulously detailing the historical background, let me point out that the interest in these topics came from several different sources. Industrialization generally raised interest in machinery and automation. At the same time, Marxist ideas contrasted capital with labor. There was also much scientific interest in logic and reasoning, and finally there were significant scientific advances in medicine and in related fields such as psychology.

By the middle of the 20th Century, a new industry known as “media” had become established – and this new industry (with roots in publishing dating back several hundred years, ultimately to the invention of Gutenberg’s printing press) was strongly aligned with, and firmly rooted in, capitalism. This led to much research and development aimed at promoting and increasing the productivity of capital investment, and correspondingly less interest in human-centered (“humanist”) analyses – the final result being a focus on profit maximization.

In this context, there was an increasing divergence between machinery and automation on the one hand, and human interests (including notions of “humanity” and “humane behavior”) on the other. Increasingly, humans became an input into algorithms focused on maximizing other measures, such as output or profit. Today, automatism is popularly viewed as a dystopian Luddite horror story rather than in a context of scientific fascination with natural phenomena.

The ideal scenario in this scheme is the “making money while you sleep” image: a fat slob sipping a drink, gazing at bathing beauties, relaxing under a palm tree on the beach of some remote island beside a laptop tallying up the money rolling in – while dumb laborers in some grungy industrial town far away work in sweatshops to scratch together enough money for rent, food, clothing and maybe, every now and then, a cigarette.

You just assumed that someone was paying attention.

Nat Simons of Renaissance Technologies, quoted in Malcolm Gladwell, “Talking to Strangers”, chapter 4, audio version 12:52 [talking about the Bernie Madoff Ponzi scheme fiasco]

Apparently, no one was paying attention to the Ponzi scheme fraud perpetrated by Bernie Madoff – nor to any of the dozens of cases of crimes against humanity documented in Malcolm Gladwell’s book. There are many, many more cases throughout the 20th Century where apparently no one was paying attention. In most industrial countries, a large portion of the population ingests chemicals – produced by ever more profitable industries – to help them pay less attention. Is this a case of automaticity in action? Is this good or bad, right or wrong?

Many industries reap large profits by manipulating information such that humans automatically behave in ways reminiscent of Pavlov’s experiments with dogs. These industries reap so much profit that they are willing and able to invest large sums into research and development – not about products or services, but rather about marketing products and services to consumers willing and able to pay for them, leading to still more profits for these industries. Is this a case of automaticity in action?

When we utilize a search algorithm, are we aware of the way that search algorithm works? A few years ago, I asked Matt Mullenweg to pay attention to this question. There were thousands of software developers in the room – you could hear a pin drop. Later, several developers spoke with me and laughed at how absurd it was for me to question Google’s authority in this field. Is this a case of automaticity in action?

The 20th Century is over, but we need to be aware of our roots. There are legacy technologies. Each legacy technology potentially gives rise to its own distinct legacy automaticity. We are morally accountable for our decisions to use a technology, or to refrain from using it. If Greta Thunberg can choose to cross the Atlantic Ocean in a boat, you can choose to behave rationally the next time you search for information.

We are free to choose. Will we choose the automatism of one of Pavlov’s dogs? Sometimes, yes. Always? Well, maybe we ought to ponder the alternatives a little more….

* I am reminded here of Susan Sontag’s excellent “Regarding the Pain of Others” – if you need a little more disgust in your life, I recommend picking up a copy.

[thank u, next]


Automatism + Automaticity – First Thoughts

Automatism and automaticity are “real concepts”, but they are not widely used … or at least not widely used everywhere in the same way.

One of the fields where these concepts are most widely used is the broad field of medicine (or, even more broadly, biology). Here, the conceptual nomenclature is sometimes more focused on automatism, sometimes more on automaticity – but in either case it is primarily concentrated on how the central nervous system signals certain processes to function automatically. This might involve simple things like breathing or regular heart functioning, or more complex behaviors such as jumping when startled or walking without being precisely aware of the movements of our limbs, feet, the muscles involved, etc. Although I am by no means a specialist in these fields, my impression is that such automatism / automaticity is very fundamental, basic functioning – closely associated with the brain stem, the amygdala, “lizard brain” thinking, etc. My gut feeling also leads me to believe that there are good reasons for such automatism / automaticity to have been useful from an evolutionary perspective (e.g. jumping up into a tree might have been a good way to survive at one point in time).

Let me fast forward hundreds of millennia, or maybe even a couple million years, to the present. A few hundred years ago, there were many technological breakthroughs (for example: the printing press). Soon thereafter, many related developments led to what many people today refer to as “democratic government” – what is usually meant here is what Tom Paine meant when he wrote “in America, law is king”. Of course the Magna Carta was also a law that could be relied on, but these new forms of government greatly expanded the use of constitutions and similar rule-based legal systems. Today, we live in a world that is to a very significant degree based on written laws. Oddly, from this perspective human lives are actually controlled by written codes.

The way I see it, both of these phenomena are about automatism / automaticity. In both cases, things that happen … happen automatically.

Indeed, there are (in my humble opinion) many phenomena throughout the everyday lives of humans, perhaps throughout all of life in general which are embedded with principles of automatism / automaticity. One of the primary reasons we aren’t talking a whole lot about them is that these things seem invisible to our awareness, or perhaps so blatantly obvious that we don’t ever mention them because we’re convinced they must be plain and simple “common sense”.

These days, such “common sense” attitudes seem to be becoming more widespread. People who have been following my writings for a while will probably not be shocked to hear me say that I have become increasingly alarmed at the apparently unbridled naiveté with which the vast majority of the online population surfs the World-Wide Web.

Yet my incessant discussions with friends – about my exasperation over the overwhelming degree of illiteracy and the continued lack of enlightenment with respect to rational information-seeking behaviors – have now led me to what I consider to be a truly rewarding outcome: Automation is not inherently good or evil; and it is a human moral imperative to pay attention to “right automation”.

What does that mean?

I don’t know yet, but I want to find out. One method I would suggest to start off with is by process of elimination – right automation is not wrong automation. I would say that first hypnotizing someone and then commanding the hypnotized patient to drink a lethal dose of poison ought to obviously qualify as wrong automation – and I would add that there do seem to be such prohibitive laws in cases of torture, inhumane acts, etc.

I hope that such extremely dire cases of depressing despotism are rare. I expect that we will pay increasing attention to logic, rationality and reasoning as we think more and more about which methods could be automated, which ought to be automated, and so on.

For example, consider search algorithms: Do we want search results to show links to any result based simply upon how much money will be paid (whether by us or by someone else)? Or based upon whether sufficient money is paid and whether the person (or computer or smartphone or robot or whatever) searching is in the United States, Europe or some other location? Do we always want the same results, or do we want the types of results we get to depend also on our own wishes? In other words, do we want to have several algorithms at our disposal – such that we would be free to choose which algorithm we want to use right here, right now, right for us? These are just some more or less random examples; I hope to work out a more systematic approach to this vast field of possibilities later on. A minimal sketch of the “several algorithms at our disposal” idea follows below.
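To make that idea a little more concrete, here is a minimal sketch in Python – all names are hypothetical, and the two scoring rules are deliberately crude stand-ins for real ranking methods – of how interchangeable ranking strategies could sit behind a single search box, with the chooser being the user rather than the platform:

```python
from typing import Callable

# A "document" is just a dict of text and metadata; a ranking strategy
# takes the query and a document and returns a relevance score.
RankFn = Callable[[str, dict], float]

def rank_by_term_frequency(query: str, doc: dict) -> float:
    # Score by how often the query words appear in the document text.
    words = query.lower().split()
    text = doc["text"].lower()
    return sum(text.count(w) for w in words)

def rank_by_payment(query: str, doc: dict) -> float:
    # Score by how much the document's owner paid for placement.
    return doc.get("ad_spend", 0.0)

STRATEGIES: dict[str, RankFn] = {
    "relevance": rank_by_term_frequency,
    "sponsored": rank_by_payment,
}

def search(query: str, docs: list[dict], strategy: str = "relevance") -> list[dict]:
    # The user (not the platform) picks which algorithm ranks the results.
    rank = STRATEGIES[strategy]
    return sorted(docs, key=lambda d: rank(query, d), reverse=True)
```

The design point is simply that the strategy is a parameter: nothing in principle prevents a search service from exposing that parameter to the person searching.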

Let me end this first essay with such an exercise in rationality. I will use the term “automatism” to refer to the actual automation development process. For individual instances of automation, I will use “automaticity”. I think this will be roughly equivalent to the evolutionary terms “phylogenesis” (in this case, “automatism”) and “ontogenesis” (here, “automaticity”). I think this distinction is worthwhile because I expect there might be cases in which it would make sense to think about the principles that underlie the evolution of automation versus the automaticity of any particular automaton.

[thank u, next]


Only Fools Rush in to Build on Top of Closed (Secret, Proprietary) Platforms

Last week Amnesty International published a report about the online platforms of the leading surveillance giants – see my brief post about it @ Find News.

What I find particularly significant about the news is not the recommendations for public policy, but rather the recognition by the free markets that there are now two (2) surveillance sheriffs in town – just about a decade ago, the online surveillance market was completely dominated by one (1) single surveillance monopoly power. That monopolist company still rules the Internet, but now at least most people realize that Google is no longer the only game in town.

Nonetheless, most fools on the Internet still don’t understand how foolish they obviously are.

Innumerable fools – among them billions of suckers as well as leading stalwart public companies – continue to sign up and give away their private data freely in exchange for worthless garbage. The platforms they sign up for are a convoluted mess of smoke-and-mirrors reminiscent of a Kafka-esque machine-in-a-box, run by some Dr. Seuss character ready, willing and able to sell “stars upon thars” to anyone prepared to cough up a little (and in some cases a lot of) cash.

The magic machinery promises to deliver results.

Western civilization has been here before. The results were: The Protestant Reformation and The French Revolution. In case you’re having difficulty connecting the dots – the story doesn’t end well for the Roman Catholic Church (even though it still seems like they’re doing OK today).

Another result was Gutenberg’s invention of the Printing Press – yet this was probably at least as much a result of the humanistic attitudes developing in Renaissance Italy… perhaps a century (or more) earlier. Another result of the Printing Press (besides The Protestant Reformation) was the birth of The Scientific Method, which in turn resulted in The Industrial Revolution, which led to further economic development … and finally: here we are!

Most of the significant technological developments of the past five centuries are built upon a strong foundation of open and transparent information. Why would any rational human being ignore this obvious fact, and instead invest their entire future in a clandestine organization which promises better results brought to you by a secret formula?

Your guess is as good as mine! 😉


How to make facts

A guy named Edward Snowden was interviewed on the Joe Rogan Experience recently, and here is something he said:

This is the context: You say you know, and — you know, let’s put it the other way: maybe you do know. Maybe you are an academic researcher, maybe you’re a technological specialist, maybe you’re just someone who reads all of the reporting and you actually know. You can’t prove it, but you know this is going on. But that’s the thing in a democracy: the distance between speculation and fact. The distance between what you know and what you can prove to everybody else in the country is everything in our model of government — because what you know doesn’t matter; what matters is what we all know … and the only way we can all know it is if someone can prove it, if you can prove it … and if you don’t have the evidence you can’t prove it.

JRE #1368 1:51:50 – 1:52:35

Could we please sit back for a moment and ponder that suggestion in the context of science and the scientific method? Science can’t prove anything, but what Mr. Snowden is suggesting is that evidence can — and that it’s the only thing that can. I realize that many scientists as well as numerous lawyers may very well shake their heads and scoff at such a simplistic confusion of the term “evidence” from two completely different fields, two completely different traditions, two vastly separated realms of knowledge.

Yet what about the millions of men and women in the streets? What does the twitter universe tweet out across the world ad nauseam? Facts, evidence, and insurmountable floods of gossip — wrongdoing, rightdoing, likes, dislikes, regurgitation of suppositions, and whatnot other similar processed foods for thought.

We live in a land plagued with schizophrenia: on the one hand modern scientists maintain that nothing in the universe can ever be proven, but on the other hand modern journalists provide reams of evidence on a daily basis to prove to the public some facts as undeniable. This daily digest of tidbit proofs is leading to data flooding and causing catastrophic psychological indigestion for the countless global masses.

Is it possible in this day and age to reconcile these opposite world views, to bring about a little hope for coherence in our data and media diet? Why don’t we presume innocence before bombing the world to smithereens? Why can’t we acknowledge that we don’t know? Why not refute the notion of undeniability (is that even a word — how about “incontestability”)? Is there in fact no such thing as a self-evident proof?


RE: On Bullshit

A friend of mine recently mentioned he has back-ordered “On Bullshit” (by Harry Frankfurt) and I thought “oh neat – maybe I can borrow it sometime for a day or two”… but then I realized something.

Bullshit is not rare or special or in any way particular. It is widespread. Everywhere you look, you can see bullshit converging in on you. I am no more interested in studying bullshit than I am in investigating the tons of junk and/or fecal matter that might arrive at any number of dumps on a daily basis across the planet. I don’t care about the meaningless 99%, I want to know what makes the 1% especially meaningful.

My gut feeling tells me that in order to cut the crap, a person must care about something in particular. I was trained in the Strunk & White school of thought, which dictated that words must be chosen wisely, with both precision and accuracy. Rationality is a very surgical matter, and errors are simply unacceptable.

This reminds me of another thought I recently had whilst wallowing through yet another quagmire of apparently endless streams of text: if you want to write something meaningful, then the meaning you want to write down is enough. I don’t need to know whether it’s your birthday or whether something else happened – just tell me what you want me to know or think or feel or whatever.


Some Reflections on the Revolution in Propaganda

More or less exactly ten generations after Edmund Burke’s treatise concerning the French Revolution, and roughly twenty generations after the invention of Gutenberg’s printing press, I would like to give you a small update on the state of news, media and publishing following the advent of modern computers in the dissemination landscape.

In this endeavor, I will utilize a case study involving a podcast video on the interwebs – in particular on youtube.com – which I hope will provide a graphic illustration of what’s going on right now. The case in point is a discussion between an evolutionary biologist, William von Hippel, and a media magnate, Joe Rogan, concerning the publication of Mr. von Hippel’s new neato book titled “The Social Leap”. I shared a link to the entire discussion a couple weeks ago; here I wish to focus on a short segment starting at 2:08:55.

Originally, my fascination with the topic centered on the origins of human language, but unfortunately there was hardly any discussion of this during the podcast. Although there are many fascinating points regarding the evolution of homo sapiens, very little (if anything at all) was directly related to the genesis of human language. I have often noted that the very first line of the Bible’s Gospel of John directly indicates “the word” as being at the beginning of human history, but exactly how this first word was ever spoken remains an enigma. My own hunch is that it followed other types of expression – such as body language, facial expressions and the like – and that several rather complex communicative norms needed to become institutionalized (and that language was therefore perhaps far more difficult to develop than other technologies). I imagine that three evolutionary developments might have been particularly advantageous, namely: 1. increased brain size; 2. “whites” of eyes; and 3. improved vocal apparatus. Mr. von Hippel also mentions the first two of these developments.

I have heard Noam Chomsky give a ball-park estimate of ca. 75 thousand years ago for the approximate beginnings of language. Most of the developments mentioned by Mr. von Hippel predate that by a long shot, but the segment I mentioned above (2:08:55) has to do with a development that is undoubtedly much newer, since it is about reasoning and argumentation (which, as far as I know, must require language). The segment begins with a discussion of confirmation bias, and Mr. von Hippel then mentions a 2011 paper written by Hugo Mercier and Dan Sperber, saying the paper shows that humans evolved to use confirmation bias to persuade each other of their own opinions rather than to find out what is actually true. I was shocked by this statement and read the original article. Upon doing so, it became clear to me that Mr. von Hippel had misrepresented the original findings – I contacted Hugo Mercier, and he assured me that my shock was indeed warranted.

Mercier & Sperber (2011), on the contrary, contend that while the confirmation bias may very well be active when producing arguments, it is largely inactive during the evaluation of arguments. This asymmetry between production and evaluation is crucial, and to overlook it is a gross distortion of the findings. Why did this happen?

I believe the answer to this question involves yet another development in the history of human languages, perhaps even newer than the “Why do humans reason?” development of argumentation proposed by Mercier & Sperber. Perhaps the earliest records of writing date back to cave paintings and sculptures made by humans tens of thousands of years ago, but the development of writing systems standardized enough to be used for communication across larger stretches of space and time required the development of more advanced social institutionalization – perhaps dating back no further than just about 10,000 years (in other words, only ca. 500 generations).

For most of this time, writing was extremely limited and was only available to the most educated classes. Therefore, any ideas shared would only be written down if they passed muster with such highly educated gatekeepers. In my humble opinion, this recurring process led to the development of something I wish to refer to as a publication bias – a “believability” of ideas that have been written down. Shortly after the invention of Gutenberg’s printing press a little over 500 years ago, the world was briefly shaken up… but that came to an end when copyright law was established and the production of large-scale printing presses became prohibitively expensive. For the past several hundred years, the publication bias has largely been reinstitutionalized, though the publishing industry became highly fragmented (from a church monopoly before 1500 to a plethora of publishing gatekeepers thereafter). The new gatekeepers were governed by many laws, and thereby it was possible to control the dissemination of information. Early modern information technologies such as the telegraph, telephone, radio, television, etc. did little to change that.

What did change it was the advent of the personal computer. Desktop publishing was hardly a challenge to traditional publishing, but electronic publishing is marching forward in leaps and bounds on its way to completely eradicating the titans of the paper era. Day after day, the cost of publishing information across the entire globe continues to fall to new record lows. It is a well-known, commonplace fact that publishing technology has now escaped from Pandora’s box, and that it is now nearly everywhere, cheap and easy to use… for anyone.

And therein lies the rub: The days of publishing gatekeepers are finally over. Clicking a button is not at all difficult to do… and so everyone’s doing it.

The result we need to face today is that the publication bias – the naive trust in written information – is (or at least should be) also gone, probably forever (or at least for the “foreseeable future”).

And yet we see, virtually on a daily basis, that the publication bias is actually very far from gone. On the contrary: not only do old habits die hard, but now we have even more such biases, new and improved. Perhaps leading the pack is the modern brand name – completely vacuous and empty, but highly valued, exclusive and nearly impenetrable to most rational thought processes. Brands carry the weight of innumerable imaginary people, built up over years, decades, if not centuries. Such colossal weight bogs down the average human’s mind, and the most popular brands are revered as gods, never to be doubted or questioned. What previously had been relegated to print can today fly as high as Coca-Cola, Apple, Amazon, Facebook or Google or YouTube or untold other brands. No longer is the sky the limit, either – no, these fantastic companies will fly to the moon, Mars and far beyond into space, reaching for the stars.

Will ordinary humans ever come back down to earth? How will we ever be able to re-introduce a modicum of rationality into our species? Perhaps we should untie ourselves from our slavery to brands, brand names, megalithic monopolistic enterprises and such. Maybe we should return to ordinary communications – straight talk, free of mumbo jumbo.

Luckily, the founders of the Internet apparently had enough foresight to recognize the potential dangers of centralized information resources. The technology at the basis of modern civilization today is actually not the problem. The problem is modern human behavior, especially the way modern humans behave in groups. We have seen this time and again throughout the 20th Century; now we must “human up” and become more reasonable.

We must learn to recognize the difference between fake and real. This is actually not as difficult as it sounds. What makes it relatively simple is recognizing that the human languages we use on a daily basis are our own, and that we are free to communicate our ideas, wants and needs as we please. We don’t need no central authority to control our thoughts. We don’t need no dictator to figure out the truth. We can rely on understanding other humans, and on being understood by them. Humans are rational beings – and that means they will rationalize their ideas, each according to their own language. Mutual understanding among humans is the primary goal we must strive for. Regular, ordinary straight talk is the basis of human rationality, and it is time we recognize this fact and reestablish regular, ordinary straight talk in our daily lives, our information and communication technologies and our entire media landscape.

We should not trust that Joe Rogan or William von Hippel are right. We should not feel secure that the big data algorithms of YouTube or Google will watch out for us. We need to open our own eyes for ourselves and take a good hard look at reality – because that is what matters.

One last point I wish to address is an issue that I feel could easily lead to a misunderstanding. While I argue that brand names are inadequate as symbols of trust or reliability, brand names do serve a constructive and useful function in the modern social order. These labels and identifiers enable us to refer to individuals, individual entities, individual processes and distinct, unique phenomena we engage with and participate in on a daily basis. Therefore, they serve an integral role in our entire social fabric. Note, though, that our ability to reference such entities and phenomena has very little to do with the trustworthiness of the entities or phenomena themselves, but rather with the trustworthiness of the social order – for example, a well-functioning legal framework that forms the basis of such well-established social institutions as private property, fair trade, open communications, etc.

Meaningful information requires language, and meaningful accounting requires itemization. Bringing both of these phenomena together is a matter of dovetailing information organized via language with the accountability of big databases. If you would like to participate in helping to make this happen, I invite you to get up and sign up with phenomenonline.com!


The Cooperative Principle in Conversation versus the Prejudice in Silence

In the following, I understand the Internet as a massive text connected by many participants conversing with one another. Parts of the text are in close connection, and the discussion can be viewed as heated insofar as the sub-texts reference each other in some way (links are merely one example of such cross-references). Other parts of the text are fairly isolated, hardly discussed, rarely (if ever) referenced. I want to argue that the former parts are “well formed” in the sense that they follow the cooperative principle of Grice (1975), and that the latter seem to evidence a sort of prejudice (performed by the disengaged participants) – which I hope to be able to elucidate more clearly. A tiny sketch of this connected-versus-isolated picture follows below.
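As a toy illustration of that picture (the post names are made up for the example), consider the Internet-as-text as a small graph in which some parts reference one another while others sit entirely unreferenced:

```python
# Sketch: the web as a graph of texts; "connected" parts reference each
# other, while "isolated" parts neither reference nor are referenced.
links = {
    "post_a": ["post_b", "post_c"],  # post_a references b and c
    "post_b": ["post_a"],            # b references a back: a conversation
    "post_c": [],                    # c is referenced, but silent itself
    "post_d": [],                    # d references nothing...
}

# ...and nothing references d, so it sits outside the conversation.
referenced = {t for targets in links.values() for t in targets}
isolated = [p for p in links if p not in referenced and not links[p]]
print(isolated)  # ['post_d'] – the rarely-discussed, never-referenced part
```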

Before I embark on this little adventure, let me ask you to consider two somewhat complementary attitudes people commonly choose between when they are confronted with conversational situations. These are usually referred to as “feelings” – and in order to simplify, I will portray them as if they were logically diametrically opposed… whereas I guess most situations involve a wide variety of factors, each varying in shades of gray rather than simple binary black versus white, one versus zero. Let’s just call them trust and distrust, and perhaps we can describe elements of any situation as trustworthy versus untrustworthy.

Next, let me introduce another scale — ranging from uncertainty (self-doubt) to certainty (self-confidence).

Together, these two factors of prejudice (in other words: preliminary evaluations of other-trustworthiness and self-confidence) crucially impact our judgment of whether or not to engage in conversations and discussions, and whether to voice our own opinions, online or offline. A minimal sketch of this two-factor judgment appears below.
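Purely as an illustration – the factor names, the [0, 1] ranges and the additive threshold are my own assumptions, not a validated psychological model – the judgment described above might be sketched like this:

```python
# A minimal sketch of the two-factor engagement judgment described above.
def will_engage(other_trust: float, self_confidence: float,
                threshold: float = 1.0) -> bool:
    """Both factors range over [0, 1] (shades of gray, not binary);
    engagement happens only when their combined weight clears a threshold."""
    return (other_trust + self_confidence) >= threshold

print(will_engage(0.9, 0.4))  # trusting but unsure -> True (1.3 >= 1.0)
print(will_engage(0.2, 0.3))  # distrustful and self-doubting -> False
```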

As we probably all know, the world is not as simple as a reduction to two factors governing the course of all conversations. For example: How does it happen that a person comes to fall on this end or that end of either scale? No doubt a person’s identity is influenced by a wide variety of group affiliations and/or social mores, norms and similar contextual cues which push and pull them into some sort of category, whether left or right, wrong or fixed, up or down, in or out with mainstream groupings. One of the most detailed investigations of the vast complexity and multiplicity woven into the social fabric is the seminal work by Berger and Luckmann titled “The Social Construction of Reality”.

While I would probably be the first to admit that the above approach is a huge oversimplification of something as complex as all human interaction on a global scale, I do feel the time is ripe for us to admit that the way we have approached the issue thus far has been so plagued with falsehoods and downright failures that we cannot afford to continue down this path. In an extreme “doomsday” scenario, we might face nuclear war, runaway global warming, etc., all hidden behind “fake news” propaganda spread by robots gone amok. In other words, continuing this way could be tantamount to mass suicide, annihilation of the human race, and perhaps even all life on the planet. Following Pascal, rather than asking ourselves whether there is a meaning to life, I also venture to ask whether we can afford to deny life has any meaning whatsoever – lest we be wrong.

If I am so sure that failing to act could very well lead to total annihilation, then what do I propose is required to save ourselves from our own demise?

First and foremost, I propose we give up the fantasy of a simplistic true-or-false type binary logic that usually leads to the development of “Weapons of Math Destruction”. That, in my humble opinion, would be a good first step.

What ought to follow next might be a realization that there are infinite directions any discussion might lead (rather than a simplistic “pro” vs. “contra”). I could echo Wittgenstein’s insight that the limits of directions are the limits of our language — and in this age of devotion to ones and zeros, we can perhaps find some solace in the notion of a vocabulary of more than just two cases.

Once we have tested the waters and begun to move forward toward the vast horizons available to us, we may begin to understand the vast multi-dimensionality of reality – for example including happy events, sad events, dull events, exciting events and many, many more possibilities. Some phenomena may be closely linked; other factors may be mutually orthogonal in a wide variety of different ways. Most will probably be neither diametrically opposed nor completely aligned – the interconnections will usually be interwoven in varying degrees, and the resulting complexity will be difficult to grasp simply. Slowly but surely, we will again become familiar with the notion of “subject expertise”, which in our current era of brute-force mechanistic algorithms has become so direly neglected.

If all goes well, we might be able to start wondering again, to experience amazement, to become dazzled with the precious secrets of life and living, to cherish the mysterious and puzzling evidences of fleeting existence, and so on.


WordPress Search Engine Optimization

I asked MA.TT a question – of course it’s complicated, but it definitely has something to do with the three-letter acronym (TLA) called SEO.

As I indicated in my question, this has nothing to do with Google and/or similar proprietary (“secret”) algorithms. I was talking about WordPress as a search engine.

IMO, Matt’s answer was excellent. It underscores how challenging such a project might be. Yet Matt also emphasized (during the interview with Om) that text will continue to be of central importance to the way the World-Wide Web works (and will continue to work). As an addendum to Matt’s insight, let me note that any kind of artificial intelligence initiative will always be a matter of pattern recognition, and the patterns that it attempts to recognize will always be text, a written representation of something (e.g. spoken language), etc.

Since my time for asking such a question was limited, I couldn’t go into many details. Even here, I do not want to bore you with the pros and cons of hitchhiking along on other people’s websites. As I have written in previous posts, the foundations for quite satisfactory information retrieval (aka “search”) are already present on the so-called (WP) PLATFORM.

At this point, I would like to try to elucidate at least one part of what I was talking about when I mentioned that an information-retrieval system need not necessarily be a one-size-fits-all solution. People who have been following my writing for a long time know that this is a “pet peeve” of mine, which I mention time and again. Note also that during Om Malik’s interview with Matt Mullenweg, the two also discussed McDonald’s (TM). I find the use of brand names such as McDonald’s or Google or Tupperware or whatever intriguing – is this a good or a bad thing? IDK… – but I digress.

The point I want to underscore especially emphatically here is that WordPress isn’t simply a “software” or an “app”. It is also a community, one which involves a lot of people with a great deal of natural intelligence. There are many points of view. There are many approaches. And just as there are many roads to Rome, so too there are many possible, viable solutions to information-retrieval (“search”) technology.

Let me give a couple examples beyond the ones I mentioned when I asked Matt the question in Paris (at the 2017 WordCamp Europe conference). The examples I mentioned of how “every website is a search engine” both came from the wordpress.com website. There are certainly many more examples possible from this website, but I chose to highlight discover.wordpress.com and wordpress.com/tags as premier examples. In my opinion, these two search options show two different kinds / levels of community engagement.

Likewise, there are also many search capabilities available to self-hosted WordPress sites. The most straightforward of these is undoubtedly the search widget (a “search box” in which the search text is entered and then searched). This is a very simple algorithm, and it is primarily useful for “known item” searches – for example, if you already know the title of a post or a string of words inside the post content. Normally, this search box does not search many other fields, such as the tag field or comments. However, a site’s registered users can search these fields via the “backend” portion of the WordPress software. In this sense, each site has more search functions (and functionality) available to registered members… and therefore also offers higher levels of capabilities to more engaged users. The sketch below illustrates this difference.
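Here is a minimal sketch of that difference in searchable fields – this is illustrative Python, not WordPress’s actual PHP implementation, and the field lists are simplified assumptions:

```python
# Public search matches fewer fields than the "backend" search that is
# available to registered users (a simplified model, not the real thing).
PUBLIC_FIELDS = ("title", "content")
BACKEND_FIELDS = ("title", "content", "tags", "comments")

def simple_search(query, posts, registered=False):
    fields = BACKEND_FIELDS if registered else PUBLIC_FIELDS
    q = query.lower()
    hits = []
    for post in posts:  # each post is a dict of field name -> text
        searchable = " ".join(str(post.get(f, "")) for f in fields)
        if q in searchable.lower():
            hits.append(post)
    return hits

posts = [{"title": "Hello", "content": "world", "tags": "greeting"}]
print(simple_search("greeting", posts))                   # [] – tags not public
print(simple_search("greeting", posts, registered=True))  # finds the post
```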

Note also that such different levels of capability are also part and parcel of the distinction between tags and categories (with respect to the primary function of information storage – which is the basis for later information-retrieval capabilities): while in a standard WordPress implementation many users are able to create tags for posts, only a rather limited set of users are capable of creating categories for posts. Of course, expert site administrators can configure such settings (which may be good, but having reliable standards also makes learning how to use WordPress easier for new users). The sketch after this paragraph illustrates the general role-gating idea.
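The role and capability names below are illustrative assumptions; WordPress’s real capability system is considerably richer. The sketch only shows the general pattern of gating an action behind a role:

```python
# Illustrative role -> capability mapping (hypothetical names).
ROLE_CAPS = {
    "administrator": {"create_tag", "create_category"},
    "editor":        {"create_tag", "create_category"},
    "author":        {"create_tag"},
    "subscriber":    set(),
}

def can(role: str, capability: str) -> bool:
    # An action is permitted only if the role's capability set includes it.
    return capability in ROLE_CAPS.get(role, set())

print(can("author", "create_tag"))       # True
print(can("author", "create_category"))  # False – reserved for fewer roles
```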

In my opinion, community engagement is probably also the crux of WordPress search engine optimization across sites. For example, it might be important to distinguish between different meanings of the same string – e.g. “development” might mean very different things in different settings / contexts (such as with respect to software, economics and psychology). I think it might make a lot of sense for WordPress to provide support for the development (no pun intended 😉 ) of sub-communities within the greater WordPress community – and thereby to enable people to share, exchange and review each other’s ideas, to set these ideas into their appropriate contexts, etc. Indeed, there is a long tradition of abstracting and indexing in scientific literature – and learning from decades (if not centuries) of experience and insights might be a very good thing to do. A small sketch of such context-scoped tagging follows below.
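A context-scoped (or “namespaced”) tag index is one simple way to realize this. The sketch below is an assumption of my own, not an existing WordPress feature:

```python
from collections import defaultdict

# Index posts by (context, tag) so that one string can carry different
# meanings in different sub-communities.
index = defaultdict(list)  # (context, tag) -> list of post ids

def tag_post(post_id, context, tag):
    index[(context, tag)].append(post_id)

tag_post(1, "software",  "development")
tag_post(2, "economics", "development")

# Same tag string, disambiguated by community context:
print(index[("software", "development")])  # [1] – not post 2
```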

One recurring theme I heard repeated throughout the WordCamp Europe conference in Paris time and again was the notion of how WordPress functions as a community of engaged people. This aspect of community engagement is definitely a very strong advantage of WordPress with respect to search engine optimization. Another is the very strong foundation of “open source” ethics – which I also described in my previous post.


Why the Scientific Method and Open Source are Some of the Best Things Since Sunlight

About 100 years ago, some guy named Louis Brandeis apparently said something like:

Sunlight is the best disinfectant.

Mr. Brandeis was speaking metaphorically (his statement was made in the context of a book titled “Other People’s Money and How the Bankers Use It”). There are in fact other phenomena which also act as social disinfectants against corruption, exploitation, manipulative propaganda and similar social ills. Some of these go back just a few years, others many centuries or even millennia.

Since open source software is a technology that requires computer hardware, it is a very new phenomenon. The scientific method, which has a similar foundation rooted in widely available publications of observations, is several centuries old – at least. Indeed, prominent scientists and philosophers alike have put public discourse at the center of the marketplace of ideas since time immemorial. We can trace the history of such public methodologies as fundamental to the development of civilizations worldwide – for example, the technologies of writing and written languages are several thousand years old. Spoken language is probably much older than that, with some estimates dating the origins of such “natural” languages to perhaps about 75,000 years ago.

Note that languages are not individual, but rather (at least) social phenomena – and perhaps they are even evolutionary developments that are in some respects independent of the societies and civilizations that use language (one obvious example being “genetic material”, such as DNA). The wink of an eye, or the tear rolling down a cheek, requires an agreement with respect to its meaning. If and when such agreement exists, only then can we speak of a common language, and such a common language acts as an important construct for the development of community and communal engagement.

Be that as it may, these are by no means the only foundations of modern civilization… – but they are at least central pillars. We should not allow any top-secret agencies or secretive corporate interests to chip away at the progress we have made as the “premier” intelligent species. We should not allow impostors or terrorists to hijack our ship – mainly because it’s the only one we have. We must defend free speech and sunlight, lest some “get rich quick” scheme con-artists should try to pull the wool over our eyes.


Reading, Writing + Communications

Five centuries ago (more or less, depending on when you actually read this), Martin Luther nailed his famous 95 Theses to the church door. In the weeks, months and years that followed, one of the most influential publications of the Protestant Reformation was propagated across Europe.

Yet, in my opinion, perhaps the most influential contribution Martin Luther made to western civilization was something quite different. He laid the foundation for literacy in the western world.

Since Johannes Gutenberg had developed a printing press with movable type, the limited production of reading material was no longer the missing piece to a literate society. Luther recognized that in order for printing presses to improve the lives of people, people would need to acquire skills they had never needed before. Though he campaigned strongly for advances in literacy, and though he did help to start such advances, most of the great advances in literacy didn’t actually happen until several centuries later.

I feel as though I am in a situation quite similar to Martin Luther’s. Whereas for Luther it was mainly about “reading literacy”, for me it is also about “writing literacy” and “communicative literacy”. Writing ought to be self-explanatory. What I mean by “communicative literacy” is, I guess, something like knowing that when asking a friend to meet for coffee in half an hour, it may be best to use a telephone call, maybe to send an SMS / instant message, but that writing an email would probably be the wrong technology, and sending snail-mail or writing a book would be completely out of the question. All of these technologies involve both reading and writing, but only some of them are adequate to the task at hand.

Few people are aware that many technologies they use on a daily basis involve writing (and thereby data being recorded). When people punch someone’s telephone number into a phone, they usually don’t consider that act to be writing per se. Likewise, most people consider the sound that comes out of a telephone speaker to be the other person’s actual voice rather than a reconstruction of the audio signal that was recorded via the other person’s telephone microphone. And yet again, when someone moves a computer’s cursor using a “mouse”, or when they click on a button or link online, most people do not consider such actions to be writing and/or recording data. Indeed, few people are even aware that a mouse is normally referred to as an “input device” (as is a keyboard).

Much in the same way that the vast majority of Europe’s population was illiterate in Luther’s time, today the vast majority of people worldwide are by and large oblivious with respect to much of the “information and communications technology” (ICT) they rely on. And even though I have already written a lot, all of this is probably still less than the proverbial “tip of the iceberg”.

Most people still don’t know the difference between “machine-readable” data and “non-machine-readable” data, most people still do not understand the difference between quantitative and qualitative data, and most people still, even to this day, cannot tell how to identify who is responsible for the content that gets published online. (A minimal illustration of the machine-readable distinction follows below.)
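For example (the field names here are just assumptions for the sake of illustration), the same statement can be stored as free text, which a program can only scan as one opaque string, or as structured data, which a program can reliably query:

```python
import json

# For practical purposes non-machine-readable: one opaque string.
free_text = "The article was published by Jane Doe on 2019-12-01."

# Machine-readable: the same facts as named fields a program can query.
record = {"author": "Jane Doe", "published": "2019-12-01", "kind": "article"}

print(record["author"])    # reliable: "Jane Doe"
print(json.dumps(record))  # serialized for exchange between machines
```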

Most adults in developed countries today learned some basic fundamentals in school about how the publishing industry in the world they were growing up in worked. In contrast, kids today learn very little about how publishing works in the world they are growing up in now. Ask any teenager whether the device they have in their pants is currently publishing anything online (or “via the Internet”), and most of them would probably just look baffled.

We can do better, and we must do better!
