The Chameleons Among Us: ‘Thought Libertarians.’

Trying to please all, the libertarian thought pattern exists successfully in divisive times, a chameleon.

Ryan Derenberger is a freelance journalist, a Journalism and AP English teacher at Whitman HS in Bethesda, MD, and the founder of 'The Idea Sift.'

August 4th, 2020 at 5:42 pm EDT

Thought Libertarians learned one lesson from kindergarten really, really well: “Everyone is entitled to their opinion.” Those infatuated with the lesson typically pull the plug on it just before it hits KKK territory. Anything spatially earlier than explicit racism, and the brain gets all squeamish: “No, they’re entitled to that opinion. No, that’s not for us to judge.”

“No, we can’t prove that was a dog-whistle, so it’s best not to say anything at all. Didn’t you learn? Everyone is entitled to their opinion.”

Trying to please all, the libertarian thought pattern exists successfully in divisive times, a chameleon. These are divisive times. But will the strategy always reward its host with the greatest revenue into the 2030s, the 40s and beyond?

There’s an “almost” qualifier already beating around somewhere in these minds, existing to maximize yield and differentiate, because Thought Libertarians already know that their holiest of holy mantras, Kindergarten strong, is in fact not as absolute as they report it to be. So there they wait, manning neurochemical IVs, ready to euthanize their own libertarianism should the crowd around them demand it — a paradox. “Black people are what? Oh, no. I’m pretty sure that opinion’s wrong. Yeah, some opinions are wrong.”

Curious. Such mind pre-meds are willing to qualify on occasion, but suspiciously absent in their thoughts is a related qualifier, a real gimme upon graduation: “The opinions that get results are typically evidence-based.” 

Consider it a new strategy, that of the Thought Scientist.

We need not fret over the current state of the brains’ battlefields, though acceleration would be ideal. The need for the thought “Everyone is entitled to their opinion” will wane as surely as we each were born, and traditionally libertarian-styled thinkers will feel safer and safer drawing lines further and further towards a complete acceptance of accurate and evidence-based assessments of our shared reality, finally cutting off some of their historical circles in order to gain admittance in the largest ever assembled, that of the “inclusivity” crowd. King’s dream.

This shift towards the nourishing land of proof occurs naturally as survivalist needs of a species’ group fade and individuals are actually capable of saying, “No, I won’t stand for x” without jeopardizing their social status or a sufficient number of allies to warrant comfort and bring real safety. Humans are not unique in this potential; we’ve observed the same utopia engine in other mammals.

I offer “utopia” as a destination, a place not of infinite life, but of comfort, exploration and space.

At what else would we aim when working together with bodies of evidence and trust? We all have the same goals, and the means to achieve them are starting to look more and more aligned.

The brain’s neural circuits draw social lines almost literally, using spatial reasoning to articulate how closely we identify with others — friends, family, coworkers, strangers — while we each rest at a 0,0 coordinate.

In a February Scientific American feature “In Search of the Brain’s Social Road Maps,” researchers Matthew Schafer and Daniela Schiller explore the body of research on such maps as they populate in our brains. Like Facebook friend lists visualized and ordered, the maps equip you with actionable levels of trust in and reliance on others in each moment of your daily ventures. For those of you of a certain age, think MySpace Top ∞, and the stilted changes in interactions that would follow as you moved past your Top 8 and into the weeds.

Schafer and Schiller write, “Human relationships can be conceived of as geometric coordinates in social space that are defined by the dimensions of hierarchy and affiliation. Work in our lab has explored these ideas in recent years. Our results suggest that, as with other spaces, the hippocampus organizes social information into a maplike format.”

You’ve heard of “social climbers,” of “glad handers.” Evidence be damned — they will live, love and laugh with you. They’ll also do the same with your sworn enemy.

They navigate their maps differently than most.

Observe. Watch. You’ll know one of these chameleons when you see them because they seem to be able to be friends with anyone along any spectrum. They confuse this for justice, befriending the jailer and the jailed. Their graphs look like chaos from a sky view.

Crowded are the quadrants that surround them in their cogna-social maps — a bloated Facebook account visualized — and mutually exclusive are the points peopled in each Thought Libertarian’s scatter graph, like funhouse reflections over axes.

Less willing to compromise, gel or blend-in, humans who instead pull the rip cord of what amounts to an evidence-based utopia engine rev more defined lines and demand evidence sooner rather than later — now, even — and in that cord pull, their new ideas venture implicitly towards the scientific method. They are Thought Scientists in their element.

Perched from their life labs, they may at first find their old, immediate friend groups inhospitable to their new symbiotic thought mutation regarding evidence, mutual exclusivity finally clear.

But taking that loss, these Thought Scientists need not fret: a larger group of similarly oriented minds is only a channel or a tweet away, a kind of “last tribe” hidden in a z-axis but always accessible, at least digitally. Theirs is a tribe that invites all without discrimination, but it bites back when the invitation is declined, angry at the unwillingness to tune the utopia engines and compete in friendly races towards the finish line. The anger, like most thoughts of theirs, has an unabashed qualifier: they’ll forgive at any time if a defector finally wants to race. Tit for Tat, ad infinitum.
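For readers unfamiliar with the reference: Tit for Tat is the iterated game-theory strategy of cooperating on the first move, mirroring the other player’s last move thereafter, and forgiving the instant the other side cooperates again. A minimal sketch in Python; the opponent’s move history below is purely illustrative:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round; otherwise copy the opponent's last move."""
    if not opponent_history:
        return "cooperate"
    return opponent_history[-1]

# An illustrative opponent who defects twice, then returns to cooperating:
opponent_moves = ["cooperate", "defect", "defect", "cooperate"]
our_moves = []
history = []
for move in opponent_moves:
    our_moves.append(tit_for_tat(history))  # respond to what we've seen so far
    history.append(move)

print(our_moves)          # ['cooperate', 'cooperate', 'defect', 'defect']
print(tit_for_tat(history))  # 'cooperate' -- instant forgiveness once they cooperate
```

The final call shows the forgiveness clause in action: one cooperative move from the defector and the strategy drops its grudge entirely.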

Thought Scientists begin to accept how birth at any point in human history, as opposed to at its very start, thrusts one into inequities of degrees, from zero to 180 — like being invited to a board game mid-round and either getting dealt in wholly, partially or not at all.

Either way, kid, you’re playing.

If that sounds like an accurate description of reality, it’s because it is.

Which reminds me: those increasingly comfortable in evidence-based rhythms will feel less and less a need to be humble, or a need to nurse a long-debilitating imposter syndrome.

By the conclusion of this inevitable inflection point in our collective firmware, the group in which these tweaked, more evidence-based minds find themselves will be plenty big, no need to draw the line deep into Mad-Max-world for some compromising sense of comfort in numbers.

We’ll be able to demand evidence without losing friends. Wow, will we ever be productive, then. In the meantime, we wait, burdened by our patterns of the past. Tick, tock.

In our species’ immediate future, there will likely be no mass exorcism of tribalism. Tribal habit is biological as much as it is cultural, and it has been helpful for millennia. It will still be helpful, too, in creating a tribe of all humans who fight against ideas that spoil their society as opposed to fighting individuals based on race, religion, etc. Hence, the eternal open invitation that this last tribe (on Earth, we happen to call them “progressives”) offers to literally anyone, bragging about “inclusivity” to the chagrin of those who, in message boards and truncated YouTube clips, love to point to bite marks shaped like the left’s teeth.

Libertarians champion a lack of bite, a purported lack of tribalism, and they call progressives “hypocrites” for getting angry when someone doesn’t share progressive thinking.

The door to this strawmanning of the progressive animal’s behavior, inviting all to its big party, is left ajar by a rhetorical failure in progressives’ own messaging: they too report a supposed leap past tribalism, which they demonstrably have not made, nor need make to define their movement effectively. Conservatives seize the hypocrisy quickly as justification for renewing traditionalism, while libertarians use it to justify their all-sides-except-KKK-ism.

Libertarian thought patterns express to degrees in any given individual. Change rarely happens all at once, and there are overlapping patterns that like gravity pull and push this one, too. Dedication to a mostly libertarian family, for instance, who believe in “pull yourselves up by your own bootstraps” thinking, may cause a participating mind to weigh even evidence that falls into their laps differently than they would have had they been from a less politically defined family, a progressive family, or even in a situation where they endured a falling out with their family. Again, gravity.

The proof of Thought Libertarians’ game-theoretic, strategic value is, again, in their varied relationships, their keeping moderately close to different friend groups regardless of each group’s or individual’s politics or ethics, shirking the mutual exclusivity inherent to actual ethics and blending as appropriate. Put more negatively, Thought Libertarians lack spine and, importantly, believe a spine to be detrimental to their social standing and the universality of their social currency. It’s an intentional de-spining, you know, for friends.

To further examine these concepts in action, let’s shift briefly to a Platonic Q&A format, a valuable presentation in unpacking complexity. What follows was an actual exchange between a friend of mine and myself, with some expansion for clarity. 

So I mostly get you, but what is the exact opposite of “everyone is entitled to their opinion,” then? Where do we go from here?

The immediate answer is the already-existing, competing thought pattern “The opinions that get results are typically evidence-based,” with “results” optimistically defined here as any type of group prosperity — economic, mental health, artistic.

However, we must unpack more if we’re to chip our way further towards truth; what appears here as opposite is more a growth, a scaffolding revealed and filled.

Consider the efficacy of an evidence-based ethos in the year 1020. Do you think the average villager had the capacity to complete double-blind studies and peer reviews? To finance intricate explorations, or run simulations? To spend an evening researching at libraries that only existed for royalty? Before the printing press, even?

In 1020, you couldn’t wait for evidence to act; you had to use your gut and your instincts almost uniquely. You trusted those who looked like you and lived nearest you. You didn’t have a choice. For the most part, given the simple tasks of the time, gut, instinct and local trust worked. They kept you hating your national neighbors to the north, south, east and west because damnit, “they just might invade and take our resources.”

Evidence-based aiming would have been deathly slow, literally.

Now, consider the efficacy of an evidence-based ethos in 2020. Evidenced opinions put into action work. They’re also possible to Google in a minute’s time. Evidenced opinions approach predictability in a resultant action (regardless of whether an act falls into traditional categories of “good” or “evil,” by the way).

Unevidenced ones, by contrast, yield inconsistent but still net-positive results, manifesting a large “functional gap” between the aim of the opinionated and their inconsistently realized achievement. These opinions in action work about as well as they did in 1020 — not altogether bad, but not maximized.

There exists a similar type of “functional gap” with scapegoating right now, where the thought pattern itself just doesn’t yield great returns. More of us, I imagine, are willing to recognize scapegoating’s functional gap than are willing to do the same with the Thought Libertarians’. Scapegoating occurs in our wild, and it may yield some positive results, but it will not yield the heightened value of realistic assessments, where minds differentiate and assign blame in degrees rather than binaries.

The new, “opposite” thought then is more an upgrade than a replacement, a firmware for the times.

But is there really a “correct” firmware, a “correct” way to interpret our lives? With things like bigotry, racism, it seems pretty clear-cut, I think. It starts getting way more gray when you start asking questions like, “Is capitalism a good thing?” or “Is gentrification?”

Agreed on the grays, of course, because they demand a great deal of evidence, which we are slowly amassing through research and through trial and error around the world. Double-blinds. Peer reviews.

I disagree that there won’t be a narrowing preponderance of evidence pooling in some actionable corners more so than in others. To identify and measure that pooling, I would defer mainly to economists on capitalism, for instance, because economics is their field. This take is actually and simply called “expertism.”

I would also not necessarily defer to only those economists with many years in the field over those recently graduated. Consider how younger doctors enjoy lower death rates in their patients than do senior ones. New doctors are recently educated and updated, less biologically fatigued than their soon-to-retire contemporaries, largely still curious and willing to read new literature in their field, and unjaded by administrivia and by the necessary shortcuts they will begin to take later in their careers to keep their practices viable overall. Give me a well-rested doctor any day who at least believes she can burn the candle at both ends, reading and observing extra-closely, over one who throws in towels early, rested or not, in the necessary shielding their brains have created to protect them from the burden of total responsibility as they live to see their most valued patients pass away. Calloused synapses.

There are thousands of exceptions, I’m sure. A young doctor on a bad day is potentially more dangerous than a veteran dealing with the same. Experience may provide stability and steadiness, if not Holmesian instinct.

The preponderance of evidence suggests, however, that overall, I and any other would-be inquisitor should seek somewhat youthful and energetic experts if I want faster and more permanent results.

One day even soon, the evidence could semi-permanently change across the board, in which case my choices should, too. Perhaps medical school becomes so expensive in the next few years, the debt so debilitating, that no amount of joyful optimism in new physicians could combat the reality of negative-balance checking accounts and debt that lingers in their attention straight into their exam rooms.

What I just described could happen — and then we’d need a new study, wouldn’t we? That’s the thing about evidence-based thinking: it takes continual work and degrees of trust in your fellow humans, your fellow explorers who catalog the data. It takes mindfulness, cognitive vigilance, resilience and time, privileges of a modern era orders of magnitude more complexified than 1020.

To return to your question in sum, this isn’t about “correct” or “incorrect,” then — it’s about results, regardless of ethical intention. My spreading the meme that “evidence-based opinions have added value in their results” could actually empower those with malice in their hearts to be more effective in their prejudice, I admit. I also believe, however, that the times call for such a meme elevated, weaponized as it may become in a minority, for its use in the collective good will likely outweigh any abuse given the sheer number of humans already employing it proudly and putting it to ostensibly good use.

It’s growing symbiotic in the species, a natural phenomenon I’d like to accelerate.

Okay, but do you realize you’re making judgments about reality itself? Who’s to say what interpretation is or isn’t real, or what result is or isn’t good? There will be so many exceptions. Studies aren’t the end-all be-all of existence. We simply can’t know everything.

Consider calls for equity that become drowned by the amorphous pessimism of signs like, “It’s all a mess anyway.” The Cormac McCarthy-penned philosophical thriller The Counselor comes to mind: Brad Pitt’s sagacious cowboy prodding a Michael Fassbender dressed in a southern accent. Pitt offers some momentary wisdom as if it’s permanent: “I’ve pretty much seen it all, Counselor, and it’s all sh*t.”

I fear that in your question is an instinct for routine and a habit of pumping brakes on hard truths. Let me pump them for you: No one is asking you to reason the unreasonable, or believe the unbelievable.

And if your concern about pursuing knowledge borders on the spiritual, know that the Thought Scientist’s mindset is not one contrary to faith; quite the opposite. A Thought Scientist would encourage the full faith that “we do not know what’s beyond our senses or the sensors of our instruments” — and if you are a God-believing person, to put it in Western terms, they actually avoid like the plague the sin of gnostic pride: assuming that they know God’s hand.

Instead, all I would ask of you is to commit to doing permanently and consciously what you already do in 99% of your actions and thoughts: evidence them as best you can, methodically, and leave the rest for prayer, meditation, for God, for metaphysics, for whomever or whatever you may believe guides as an invisible force if at all.

We are each but single pixels on a 4k screen experiencing pale or more rarely dramatic color shifts. And so we scream to other pixels that our shift was the most accurate. I personally am wary of that instinct because I’ve seen it backfire so many times and create the exact opposite of what was intended. You will not hear me utter otherwise or claim total truth.

So let’s stick to what we can agree on, what we see with consistency instead, and just see where that takes us.

I understand how some opinions might get us more of what we want — but could we really ever use a strong word like “invalid” to describe others?

On the question and definition of validity, we must venture into the field of Logic, examples in tow.

Your brain, all brains, use “as best as they can determine” validity every day to justify what movements your body should make, including those of your typing fingers, and your wording lips. In a Platonic (that is, ideal) form, your brain takes premises derived from its senses and draws conclusions that follow from those premises to move you closer to your brain’s goals. Notice I’m not assuming a central actor or ego, though feel free to.

The decision-making that happens every moment accelerates your mind’s implicit and explicit goaling, bringing about that which you want and doing so hopefully in an accelerated fashion. 

Recall Geometry class briefly. Logical “syllogisms” are synonymous with “proofs” in Geometry: two premises or more, and a conclusion that follows. If the conclusion doesn’t follow, we have a term for that: “invalidity,” specifically within the proof’s internal logic. Logic has also given us a term “soundness” as a separate metric, a function of whether or not the premises are accurate to reality. 

Two examples:

P1. All four-sided shapes are quadrilaterals.
P2. A square is a four-sided shape.
∴ A square is a quadrilateral.

sound: yes | valid: yes

P1. All four-sided shapes are circles.
P2. A square is a four-sided shape.
∴ A square is a circle.

sound: no | valid: yes
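As a minimal sketch (in Python, with illustrative names and a toy fact base of my own invention), the distinction can be mechanized: validity is a purely structural check on whether the conclusion follows the form “All F are G; x is F; therefore x is G,” while soundness additionally tests each premise against reality.

```python
# A toy "knowledge base" standing in for reality; entries are illustrative.
TRULY_ALL = {
    ("four_sided", "quadrilateral"): True,   # all four-sided shapes are quadrilaterals
    ("four_sided", "circle"): False,         # not all four-sided shapes are circles
}
TRULY_IS = {
    ("square", "four_sided"): True,          # a square really is a four-sided shape
}

def check(f, g, x, conclusion_g):
    """Evaluate 'All F are G; x is F; therefore x is G' separately for
    validity (structure only) and soundness (validity plus true premises)."""
    valid = (conclusion_g == g)              # does the conclusion follow the form?
    sound = valid and TRULY_ALL[(f, g)] and TRULY_IS[(x, f)]
    return valid, sound

print(check("four_sided", "quadrilateral", "square", "quadrilateral"))  # (True, True)
print(check("four_sided", "circle", "square", "circle"))                # (True, False)
```

The second call mirrors the circle syllogism: structurally valid, but unsound, because its first premise is false in the toy world, just as it is in ours.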

So validity has a set definition in modern academia. A yay or nay considering a thought structure’s basic efficacy and reality is a calculation our brains complete by the second, and we complete it more consciously and arguably more efficiently when getting out of the way of our brains and letting them calculate reflectively.

If in your reflections, however, you begin with the conclusion and seek evidence post hoc, you are less likely to remain within a valid syllogism, and in turn, less likely to goal. Validity and soundness remain separate metrics even with this inversion.

Time itself brings into existence movement, whether forced or willing, and in that necessity appears either a priori logic or a posteriori thought constructions, one thought to the next. Each thought acts as a conclusion, as evidence or just as a kind of “scratch,” a general exploration without labeling anything either “P” for premise or “∴” to symbolize “therefore,” for a conclusion.

A priori snippets of our daily thought sequences place a conclusion first and then seek evidence to justify it, confirmation bias often at play. 

∴ “I’m sure meritocracy is real and people are poor because it’s their fault.” 
P1.”I know a guy who is just lazy and that’s why he’s poor.” 
P2. “And I can’t afford higher taxes to pay for social safety nets anyway.”

sound: no | classically valid: no | valid to the individual: yes

This last example of an a priori snippet is strikingly different from the first two syllogisms — even from the unsound one, which at least attempts actual logic.

What they all do, what’s common, is a shuffling of the same variables, the same types of thought players. The variables’ values alter as do their order. That’s time. That’s just life with a “before and after” and a consciousness to do the labeling.

It’s not some kind of unfounded leap to label snippets of our thought narratives as “valid” and “sound” or not, though there are limits to language itself that materialize when we attempt some kind of complete assessment of reality outside of our daily narratives and interactions. Conclusions about the projector behind the screen, rather than about the 4k screen or its pixels, we should engage with caution.

There are ways to reverse engineer how the projector works. We have researchers who brilliantly devise the relevant experiments and who will slowly explain to us their close observations about pixel patterning and what they can reliably infer. Depending on a person’s religion, we also have other local experts like pastors, or individual-based spiritual exploration to speak of the projector: your own purported expertism.

In the meantime, as philosopher Ludwig Wittgenstein once proclaimed, “Whereof one cannot speak, thereof one must be silent.” For our more faith-based and Judeo-Christian readers, we might say, “Now faith is the assurance of things hoped for, the conviction of things not seen” (Hebrews 11:1).

So you’re right to question my diction with vigilance. Language has limits, as I’ve noted before on TIS. We likely cannot, however, permanently exist in the spacy, amorphous alternative. As psychonaut Terence McKenna once put it about our more grounding instincts, “We need an ego, yes. That’s so that if you take somebody to dinner, you know whose mouth to put food in.”

Famously, the skeptic Pyrrho (yes, that Pyrrho) was so doubtful of his world, and of language itself, which he saw as deceptive, that his disciples had to walk around babysitting him to make sure he didn’t walk right off a cliff. He had decided that it was equally as likely that his step off could result in him suddenly walking on air as in a violent tumble to his death.

Let’s not be so foolish. To our recurring point, then, the thought “All opinions but the KKK’s are valid” or “Everyone is entitled to their opinion” consistently widens a functional gap between what you want and the fastest, directioned steps to get there in 2020. Each thought attempts a goal of self or social prosperity, but renders steps in slower motion than those taken by the utopia engine that is evidence-based action and group goaling. Thought Libertarianism ignores cognitive bias, the subtle impact of old habits like racism, logical fallacies in everyday conventions of thought, and widespread lack of understanding about Logic, among quite literally billions of other gravity-like pulls that will bog down a brain’s processing power.

Nowhere is the mutual exclusivity your question implies. One can understand broader happenstance and the internal validity of different minds’ processing while simultaneously acknowledging the obvious: more evidence-based and clear-thinking, expert-based opinions will get us “there,” wherever “there” is, faster.

The more apt question, then, is not “Which of the opposing thoughts will yield more successes?” but rather, “How does one empower other similarly bound humans to quit their dependence on the original?” Let this article represent one attempt at the latter, but I assure you, it will fail for a majority. The memes offered will need translation and code-switching if they’re to spread in a timely fashion to the masses.

We are in totality better, by any definition of the term, for trusting an aggregate preponderance of fresh experts, who enjoy winning records as-yet, in the assessment of slices of reality in which they are indeed experts — and we are better also when we do the same ourselves in those fields in which we are the fresh experts, be they local ones like the drama of our own social circles, or broad ones we’ve mastered or synthesized anew, PhDs on our walls or thousands of intentioned practice hours completed.

A symbiotic relationship with the relevant mindset, literally called “expertism,” will reduce what I earlier called the “functional gap” between where we are and the just society we want to manifest, allowing the social thumb depressing a slow-motion button on our remotes to finally ease. Who and why we deem any group or individual “expert” will require continued, mindful vigilance.

Lastly, damning are the impetuses for addicted, sleepy self-trust: an overreliance on, and haughty assumption about, one’s own expertise and the scope of what amounts to one pixel. We framed these individualistic habits as valuable for centuries, as fast-forward buttons, because indeed they were in the times of old. There was no connective digital tissue to aggregate expertism. There was only you, your immediate family, your town, your country, your tribe. What intellectual fields did exist were wildly simplistic and ambiguous. For centuries we pursued literal alchemy, now only an idiom for “wishful experimenting.”

So now that such connective tissue does present, and the internet pixels the full 4k screen that is Earth and humanity, then yeah, I say we thank our old ideas for their service and pry off what started as a fast-forward button and now is nothing more than standard, boring, slow motion. Thought Libertarians, your favorite idea has an expiration date.

And all of the above is my evidence.


“Everyone is entitled to their opinion,” and “All opinions are equally valid.” The only exceptions are the extremes on either end of a spectrum.

Evidence-based opinions that inform our actions efficiently plan for our getting what we want sooner, as both individuals and a collective, and our brains already operate on these directives throughout most of our waking lives. Expertism has value, but not all experts arrive in each moment equally capable. As with capacity, validity and soundness, too, are distinct metrics, and even an internally valid opinion may be demonstrably ridiculous or inverted in its structure. Our adherence to old opinions and fictionalized self-expertism may be rooted in our citing past win-loss records, a veritable team of thoughts whose jerseys are never retired despite their age — like still fielding a championship team from decades ago and expecting them to keep winning, even as they take the court with their walkers.

Thought chameleons of the world, it’s time to spark a draft.

IDEAS SIFTED: Libertarianism, tribalism, strawmanning, “social road maps,” patient death rates, The Counselor, expertism, 4k screens, Logic, validity vs. soundness, Ludwig Wittgenstein, Terence McKenna, Pyrrho, alchemy and just now, basketball drafts.
