October 24, 2016

A look through the flawed lens Oct. 24


Articles I see that please or excite

Proper use of humility: Daniel Dewey owns up to missing his giving pledge on the EA forum, and shares his plans for how to do better.

Conspiring with the Enemy and Cooperating in Warfare by Yvonne Chiu. Some faith in humanity restored. Not all of it, but some.

Andrew Gelman posts about whether it's fair to use Bayesian reasoning to convict someone of a crime. This is a conversation I want to see happen; luckily, there is a comments section to browse!

Siderea talks about what Trump's video tape does and notably doesn't mean. I have gotten extremely sick of 'my' people using Trump's excesses as an excuse to dismiss all notion of truth-seeking and integrity.
I think some people on the Right are hearing that video very, very differently than people on the Left do. And I think it important for the Left that they understand the various ways the Right is taking this. It is a crash course in feminist history, and an orientation to something important that is going on right now.
Xenosystems makes a similar point.

A good angle on the parallel moralities of America's political parties made at Meaningness.

David Henderson at Econlog clarifies that "Person X did this bad thing" and variations are not overall judgments about the person. If you try to argue based on a false statement, then truth-seekers are going to correct you even if (especially if) they agree with you.

I found the precursor to Uriel of Unsong. It's really fascinating to see how fictional rationalist universes have definite meta-causality.

Miri at BruteReason deconstructs jealousy. It's a good read for how to deal with a difficult, complex emotion.

Sarah Constantin creates a new metaphor-god called Ra, to join the pantheon of Moloch and Azathoth. It's approximately the human fault of valuing smooth ambiguity and vagueness over flawed detail and concreteness. I write these flawed lens posts mainly to counteract the thing Sarah is pointing out here, so I think it's a pretty neat metaphor to have.


Frowny faces on activities that do not tend towards truth or fulfillment.

"The ingroup has a rich, varied and deep decision making process. The outgroup are simple creatures driven by base instincts." - ContentOfMedia

"creating an ideology incompatible with any modern political structure so that you can pity yourself as the underdog indefinitely" - FatherOfNun

Laughing nervously at weird sun twitter posts - Me

Robin Hanson writes about the Smart Sincere Contrarian Trap; namely, the idea that smart, sincere people more often champion unfashionable ideas, and so high-status movers find them "Good to listen to behind the scenes to get ideas for possible new fashions, but bad to embrace publicly as a loyal group member". I spit on that (the trap, not this post or smart sincere contrarians).


Open Questions

Scott Sumner at Econlog has such pretty graphs and I don't understand them any more than the last time they showed up. They look so concise and tidy... way too concise, I am majorly missing some context. Can you explain what all of the words and arrows refer to?

I am disagreeing pretty hard with this TheMoneyIllusion post (also Scott Sumner); it feels very dismissive of what I think are real concerns for people. Increasingly I find that Trump's aims point to very critical cultural issues, but that he is absolutely the wrong person to solve them. Admittedly, a lot of that results from numerous bad arguments from Dems (same poster talking about this, ironically) inoculating me against the better forms of the arguments. I really, really do not want the grievances of traditional, rural America to fade back into ridicule and obscurity with the fall of their ill-chosen champion. Do you predict post-election riots? What measures are in place to prevent them?
Andrew Sullivan, in nymag, has a long essay on how the information age kind of crushes our souls. I think we can mitigate the harm while keeping the benefits, but I don't see a lot of coordinated effort to do so. Rationalists pay a lot of lip service to binding ourselves to reality, but we definitely spring from an in-the-air intellectual demographic. What, besides meditation, do you do to stay connected physically and emotionally?

Jeff Kaufman writes about the selection effect that goes on when some organizations aim for transparency full-time and others only sometimes. When one organization volunteers only good information, organizations that dutifully report the bad as well look comparatively awful. So, How completely would we have to reconstruct the education system to provide a firm base of statistical understanding and information?

If you think we're wrong or unhelpful, or if you aren't sure, tell us!

October 17, 2016

A look through the flawed lens Oct. 17


Articles I see that please or excite

Embodied cognition. Not in the usual sense of "clench your fists to increase willpower", but using your body and environment as part of your computational toolset.
This gives me a nice framework to mash up idle thoughts about PCK-seeking* and database design; I think we can increase ability in a lot of domains simply by picking mental representations of concepts that are easier to work with in relevant ways. Good infographics, for example, take advantage of our very powerful visual systems to convey a lot of information simply and concretely. I do not think it is a coincidence that the descriptions of mathematical savant ability I've heard include an element of synesthesia.
*Nod to Valentine Smith of CFAR

Kudos to Unit of Caring for promoting ethical norms.  Let's not go down the road where we "make it dangerous to your continued wellbeing and ability to earn a living to try to persuade people of your ideas". I'm also fond of this post on giving room for people (even lizard-people) to change their minds.

Also regarding Unit of Caring, I JUST DISCOVERED SHE'S LINTAMANDE. Her Patreon rewards sound so tantalizing all of a sudden... And seeing that she's on AO3, it makes sense why her glowfic Arda had those really freaky overlaps with that modern Silmarillion story I'd read once upon a time.

On Thing of Things, Ozy has posted an Intellectual Turing Test attempt on 1-2-3 different stances toward social justice. I could stand to see more of this and double-crux as normal activities.

This article on providing an exit-ramp for people who are dead set on ideas you think are catastrophically wrong (as they think Trump supporters are). Surprise—the people you disagree with are people! They do not respond well to being cornered and humiliated. They do respond to patience and information and validation. The article's given advice feels like a short summary of Carnegie's "How to Win Friends & Influence People".


Frowny faces on activities that do not tend towards truth or fulfillment.

I anticipate the world burning before my eyes, and run off to grab a bowl of popcorn. That is probably not the most helpful response I could have to critical and complex problems. Many things seem distant and darkly amusing. Especially the idea that the outcome of this election might seriously matter to America's short-term non-hellishness and long-term existence as a stable country. We could drag who knows how much of the global community down with us, and I'm not convinced that either candidate can do anything useful about it. Everything our predecessors ever worked to accomplish could fall apart and it's hilarious. Also funny: ISIS has bomb drones.

There is no Nobel Prize for ecology or geology or climate science. A bunch of fields just get stuffed into the peace prize #NotAllScience

This Guaranteed Jobs Proposal article. I basically read this proposal as saying 'solve the jobs problem by solving the jobs problem with the government'. This fails at the virtue of simplicity! Also, I see absolutely nothing in this proposal that seems aware of disability as a potential roadblock. I pay close attention to when my shoulder Social Justice Warrior and shoulder Libertarian agree on anything.

Robin Hanson posts on Overcoming Bias about the backlash he gets for focusing on emulation-based AIs instead of coded-algorithm AIs. I'm glad he's working on it; I do not want AI-friendliness research to become an echo-chamber. Anders Sandberg wins the cake for answering my "Can you model this?" question before I'd even asked it.
The issue of how to spread research effort cuts to the heart of EA values. I have day-terrors about EA becoming a villain organization that actively hinders people from going after low-hanging fruit, lest we waste any resources on something that is not "THE TOP 3 TRUE CAUSES".

Open Questions

How many instances of bad incentive design can you spot in your everyday life? Contract theory provides a shiny-new interesting frame on how I rate and reward myself for my own work.

Andrew Gelman's mini-paper on the scientific replication crisis, shared for this figure. Does someone you know have the skill and lack-of-better-things-to-do to make a quick graphic with comparisons of 1%, 3%, 5%, 8% power? Would recommend coloring it in a tasteful rainbow, and blowing it up to desktop background size so you have plenty of plausible excuse to pass it around.
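In case anyone takes this up: the underlying computation is simple enough to sketch. Here's a minimal, stdlib-only Python simulation (my own illustration, not Gelman's code) of the Type M "exaggeration ratio" at low power. One caveat: two-sided power below the 5% significance level isn't attainable at alpha = 0.05, so I use 6% and 8% as stand-ins for the lowest bins.

```python
import math
import random

Z_CRIT = 1.959963984540054  # two-sided critical value at alpha = 0.05


def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))


def power(delta):
    """Power of a two-sided z-test when the true effect is `delta` SEs."""
    return normal_cdf(-Z_CRIT - delta) + 1 - normal_cdf(Z_CRIT - delta)


def delta_for_power(target):
    """Bisect for the true effect size (in SEs) that yields `target` power."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if power(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2


def exaggeration_ratio(target_power, n_sims=100_000, seed=0):
    """Among 'significant' estimates, the average factor by which their
    magnitude overstates the true effect (Gelman's Type M error)."""
    rng = random.Random(seed)
    delta = delta_for_power(target_power)
    estimates = [abs(rng.gauss(delta, 1)) for _ in range(n_sims)]
    significant = [e for e in estimates if e > Z_CRIT]
    return sum(significant) / len(significant) / delta


for p in (0.06, 0.08, 0.20, 0.80):
    print(f"power {p:.0%}: significant estimates are ~{exaggeration_ratio(p):.1f}x too large")
```

The punchline the graphic would need to carry: at 6% power, anything that clears significance overstates the true effect roughly eightfold, while at 80% power the overstatement nearly vanishes.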

If there was one economic idea you could explain to everyone on earth, how would you cement it to as many people's system ones as possible? Econlog (quoting Steve Horwitz's AMA) recommends The idea that prices are 'knowledge surrogates', a critical form of expressing information necessary for making rational choices. See also

Drones delivering prison contraband. What would it take for technology adoption to definitively outpace our ability to generate defenses against it? I worry we're already there with shoddy password security, and everything is kept functional through frantic patchwork and luck.

This article on cutting Latin America's murder rate got me thinking... I usually see redistribution touted as a solution to socio-economic inequality. It might be done through voluntary donation, it might be done through obligatory taxation. If your problems are rooted in social and economic inequality, What actually happens when you directly encourage and strengthen strong empathy ties across class borders? I'm sure this is horribly naive and there are obvious ways it goes hellishly wrong, but I'm curious what the specifics of those are. I would want to create an interchange of sorts... incentivize safe and mutually beneficial interactions between economic classes. Make pen pals and patreons. Open neutral meeting spaces with TSA-level security. Integration!


If you think we're wrong or unhelpful, or if you aren't sure, tell us!

October 9, 2016

A look through the flawed lens Oct. 9

Random Musings as I trawl the news and lesswrong diaspora.


Assortment of things that please or excite

A good heuristic mentioned on Andrew Gelman's blog: "If there's no report you can read, there's no study."

Jeff Kaufman tracked his schedule. I approve of numerical precision as a general habit.

Ben Hoffman had a lovely post on the principles of truth-friendly discourse. I already intended to make a round-up like this, but it helped me flesh out my criteria for evaluating meta-discourse.

Put a number on it's words on the question "Should a person who seeks to make the world a better place follow empathy or discard it?" moved me. I delighted in the number and unobtrusiveness of reference links it holds.

This econlog post deserves a mention for the line "Since I oppose preaching to the choir, I'm aiming the book at any human being interested in (a) the ethics and science of immigration, or (b) non-fiction graphic novels". I fiercely approve of this attitude; we don't have nearly enough manpower going into bridging inferential gaps.

I retract my lingering complaints about Petrov Day rituals. The proposed program doesn't over-glorify this one decision out of all of human history as nauseatingly as the people who talk up Petrov day in conversation.


Frowny faces on activities that do not tend towards truth or fulfillment.

On Andrew Gelman's blog: "It is OK to criticize a paper, even if it isn’t horrible."
The backlash that Gelman got on criticizing a paper's statistics bothers me.  Critiquing half-good things is usually a better exercise in critical thinking than critiquing uniformly awful things.

I know I am going to wind up linking this self-deprecating explanation of the rationalist community to a number of normie acquaintances on Facebook. I'm not sure this is actually a good idea, because this desire bases itself primarily on a bitter "Fight me" impulse.

This pun induces Mixed Feelings

Open Questions

The problem of Digital security exemplifies, in a less controversial way than most of my pet issues, the tragedy of people developing an attitude of bleak resignation that prevents implementing solutions. At minimum, I can deal with this myself by dumping all of the ideas that are 'too depressing' to evaluate in one place and drawing out a random subsample of next actions. So literature review time: What do we know about how the general problem of decision fatigue manifests, and what does that imply about potential solutions?
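For concreteness, that random-subsample trick is about five lines of Python; the backlog items here are hypothetical placeholders, not my actual list.

```python
import random

# Hypothetical backlog: ideas that feel 'too depressing' to evaluate one by one.
backlog = [
    "switch to a password manager",
    "read one review paper on decision fatigue",
    "turn on two-factor authentication everywhere",
    "audit which accounts share a reused password",
    "back up the laptop",
]


def draw_next_actions(ideas, k=2, seed=None):
    """Draw a small random subsample so no single pick requires deliberation."""
    rng = random.Random(seed)
    return rng.sample(ideas, min(k, len(ideas)))


print(draw_next_actions(backlog))
```

Deciding what to do collapses into deciding to run the script, which is the point: the fatigue lives in the choosing, not the doing.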

I don't think it's fair to say that we have sold our values to the bottom line when people can hardly find any information except the price to make tradeoffs on. People fight viciously over the framing of where our meat comes from precisely because it has the power to massively shift beliefs and choices that affect millions of lives either way, and it's going to have that ability whether the shift is justified or not. I could ask how we're going to put a stop to factories shutting down informed consent with legislation, and the (lesser but still worth mentioning) occasional problem of ARAs emotionally manipulating a populace that has its naivety sheltered and reinforced by third parties, but instead I'm going to ask Is there an app/website that crowdsources where all the small local farms are, and what their specialties are?

Someone raised the question of whether humanitarian intervention constitutes an exception to the principle of non-interference in international law. Non-interference affects private actors much less, and one might assume that's due to a commensurate lack of diplomatic protection or an assumption that non-government-affiliated individuals have substantially less power to effect changes. But if non-intervention is actually important for other reasons, then it's plausible that an effective altruist could find ways around that and cause problems. What purposes does the principle of non-interference serve, and should they apply to sufficiently powerful private actors? The current analogs I see to the non-interference principle in the EA movement are tangential heuristics: 1) you are unlikely to predict local effects better than local actors, 2) nudging existing overton windows gets more results than pushing against public opinion, 3) transparency and changing course quickly in response to evidence are both important and governments usually suck at them.

What research is there on whether lucid dreaming harms learning? I'd expect it to harm the beneficial effects of REM sleep, at least as it's used by your average lucid dreamer.

Are there potentially positive uses of IP spoofing? Or is the lack of measures against it entirely about implementation difficulty and opportunity cost?

Put a number on it's post on the question "Should a person who seeks to make the world a better place follow empathy or discard it?" moved me deeply, but it still feels like a big unanswered question. When do you want to use an empathy framing? I think this is an especially important question to ask if you're planning to design ritual as I hope to.
This article on 'saving science' has an interesting framing on endemic problems in scientific research. The authors survey the major problems befalling science (failure to replicate, multiple-test mining, positive bias, elitism) and suggest that these are all fundamentally caused by leaving the course of science to "the free play of free intellects" instead of wedding it to creating concrete effects in the real world. As a key example of this, they point to the innovative outputs of the Department of Defense. I am very on board with the idea that science really needs to be backed up with something other than itself, and could be persuaded that cross-field peer grants aren't enough to keep research focused on high-impact questions. But their other case study in good science management is the National Breast Cancer Coalition, and one of the independent researchers I trust is in the process of writing a whole book on how the cancer research movement is utterly fail. I must ask, then, What would it look like for a research effort to be focused on the correct problem every step of the way? Is it possible to identify and reward when that is happening?

If you think I'm being wrong or unhelpful, or if you aren't sure, say something!

October 5, 2016

Theory of rationality

'Do not forget your purpose', I said to myself.

When you see lies and injustice to be fought, you will take up arms. You will stand and fight. You will make harsh moves and bitter choices. But the defeat of an enemy must never come at the cost of what you took up arms to protect. 'Do not forget your purpose.'

When you aspire to do great works, you will work. You will explore and rework. You will strive and persevere. But valiant attempts must never come at the cost of achieving. 'Do not forget your purpose.'

When you seek happiness, you will make meaning. You will forge connection and write stories. You will create safety and purpose. But maintaining the structure of life must never come at the cost of living it.

'Do not forget your purpose', I said to myself.