October 9, 2016

A look through the flawed lens Oct. 9

Random musings as I trawl the news and the LessWrong diaspora.

Highlights

Assortment of things that please or excite

A good heuristic mentioned on Andrew Gelman's blog: "If there's no report you can read, there's no study."

Jeff Kaufman tracked his schedule. I approve of numerical precision as a general habit.

Ben Hoffman had a lovely post on the principles of truth-friendly discourse. I already intended to make a round-up like this, but it helped me flesh out my criteria for evaluating meta-discourse.

Put A Number On It's post on the question "Should a person who seeks to make the world a better place follow empathy or discard it?" moved me. I delighted in the number and unobtrusiveness of its reference links.

This EconLog post deserves a mention for the line "Since I oppose preaching to the choir, I'm aiming the book at any human being interested in (a) the ethics and science of immigration, or (b) non-fiction graphic novels". I fiercely approve of this attitude; we don't have nearly enough manpower going into bridging inferential gaps.

I retract my lingering complaints about Petrov Day rituals. The proposed program doesn't over-glorify this one decision out of all of human history as nauseatingly as the people who talk up Petrov Day in conversation.

Lowlights

Frowny faces on activities that do not tend towards truth or fulfillment.

On Andrew Gelman's blog: "It is OK to criticize a paper, even if it isn't horrible."
The backlash Gelman got for criticizing a paper's statistics bothers me. Critiquing half-good things is usually a better exercise in critical thinking than critiquing uniformly awful things.

I know I am going to wind up linking this self-deprecating explanation of the rationalist community to a number of normie acquaintances on Facebook. I'm not sure this is actually a good idea, because the desire stems primarily from a bitter "Fight me" impulse.

This pun induces Mixed Feelings

Open Questions

The problem of digital security exemplifies, in a less controversial way than most of my pet issues, the tragedy of people developing an attitude of bleak resignation that prevents implementing solutions. At minimum, I can deal with this myself by dumping all of the ideas that are 'too depressing' to evaluate in one place and drawing a random subsample of next actions. So, literature review time: what do we know about how the general problem of decision fatigue manifests, and what does that imply about potential solutions?
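The dump-and-subsample tactic above is simple enough to sketch in a few lines of Python; the backlog entries here are hypothetical placeholders, not items from my actual list.

```python
import random

# Hypothetical backlog of ideas that felt "too depressing" to weigh one by one.
backlog = [
    "audit password reuse",
    "set up full-disk encryption",
    "review browser extensions",
    "rotate old API keys",
    "check breach notifications",
]

# Draw a small random subsample (without replacement) to use as next actions,
# so no single item has to be evaluated against the whole pile.
next_actions = random.sample(backlog, k=2)
print(next_actions)
```

The point of `random.sample` over eyeballing the list is precisely to bypass the evaluation step that triggers the bleak resignation in the first place.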

I don't think it's fair to say that we have sold our values to the bottom line when people can hardly find any information except the price to make tradeoffs on. People fight viciously over the framing of where our meat comes from precisely because it has the power to massively shift beliefs and choices that affect millions of lives either way, and it will have that power whether the shift is justified or not. I could ask how we're going to stop factories from shutting down informed consent with legislation, or about the (lesser but still worth mentioning) problem of ARAs emotionally manipulating a populace whose naivety is sheltered and reinforced by third parties, but instead I'm going to ask: is there an app or website that crowdsources where all the small local farms are, and what their specialties are?

Someone raised the question of whether humanitarian intervention constitutes an exception to the principle of non-interference in international law. Non-interference affects private actors much less, and one might assume that's due to a commensurate lack of diplomatic protection, or to an assumption that non-government-affiliated individuals have substantially less power to effect change. But if non-intervention is actually important for other reasons, then it's plausible that an effective altruist could find ways around it and cause problems. What purposes does the principle of non-interference serve, and should they apply to sufficiently powerful private actors? The closest analogs I see to the non-interference principle in the EA movement are tangential heuristics: 1) you are unlikely to predict local effects better than local actors, 2) nudging existing Overton windows gets more results than pushing against public opinion, and 3) transparency and changing course quickly in response to evidence are both important, and governments usually suck at both.

What research is there on whether lucid dreaming harms learning? I'd expect it to interfere with the beneficial effects of REM sleep, at least as it's practiced by your average lucid dreamer.

Are there potentially positive uses of IP spoofing? Or is the lack of measures against it entirely about implementation difficulty and opportunity cost?

Put A Number On It's post on the question "Should a person who seeks to make the world a better place follow empathy or discard it?" moved me deeply, but it still leaves a big question unanswered: when do you want to use an empathy framing? I think this is an especially important question to ask if you're planning to design ritual, as I hope to.

This article on 'saving science' has an interesting framing of endemic problems in scientific research. It surveys the major problems befalling science (failure to replicate, multiple-test mining, positive-result bias, elitism) and suggests that these are all fundamentally caused by leaving the course of science to "the free play of free intellects" instead of wedding it to creating concrete effects in the real world. As a key example, it points to the innovative outputs of the Department of Defense. I am very on board with the idea that science needs to be backed up with something other than itself, and could be persuaded that cross-field peer grants aren't enough to keep research focused on high-impact questions. But the article's other case study in good science management is the National Breast Cancer Coalition, and one of the independent researchers I trust is in the process of writing a whole book on how the cancer research movement is failing utterly. I must ask, then: what would it look like for a research effort to be focused on the correct problem every step of the way? Is it possible to identify and reward that when it is happening?


~*~
If you think I'm being wrong or unhelpful, or if you aren't sure, say something!

