
there is no “mainstream consensus” among intelligence researchers

vaniver:

nostalgebraist:

How’s that for a clickbait title? ;)

The motivation for this post was a tumblr chat conversation I had with @youzicha.  I mentioned that I had been reading this paper by John L. Horn, a big name in intelligence research, and that Horn was saying some of the same things that I’d read before in the work of “outsider critics” like Shalizi and Glymour.  @youzicha said it’d be useful if I wrote a post about this sort of thing, since they had gotten the impression that this was a matter of solid mainstream consensus vs. outsider criticism.

This post has two sides.  One side is a review of a position which may be familiar to you (from reading Shalizi or Glymour, say).  The other side consists merely of noting that the same position is stated in Horn’s paper, and that Horn was a mainstream intelligence researcher – not in the sense that his positions were mainstream in his field, but in the sense that he is recognized as a prominent contributor to that field, whose main contributions are not contested.

Horn was, along with Raymond Cattell, one of the two originators of the theory of fluid and crystallized intelligence (Gf and Gc).  These are widely accepted and foundational concepts in intelligence research, crucial to the study of cognitive aging.  They appear in Stuart Ritchie’s book (and in his research).  A popular theory that extends Gf/Gc is known as the “Cattell–Horn–Carroll theory.”

Horn is not just famous for the research he did with Cattell.  He made key contributions to the methodology of factor analysis; a paper he wrote (as sole author) on factor analysis has been cited 3977 times, more than any of his other papers.  Here’s a Google Scholar link if you want to see more of his widely cited papers.  And here’s a retrospective from two of his collaborators describing his many contributions.

I think Horn is worth considering because he calls into question a certain narrative about intelligence research.  That narrative goes something like this: “the educated public, encouraged by Gould’s misleading book The Mismeasure of Man, thinks intelligence research is all bunk.  By contrast, anyone who has read the actual research knows that Gould is full of crap, and that there is a solid scientific consensus on intelligence which is endlessly re-affirmed by new evidence.”

If one has this narrative in one’s head, it is easy to dismiss “outsider critics” like Glymour and Shalizi as being simply more mathematically sophisticated versions of Gould, telling the public what it wants to hear in opposition to literally everyone who actually works in the field.  But John L. Horn did work in the field, and was a major, celebrated contributor to it.  If he disagreed with the “mainstream consensus,” how mainstream was it, and how much of a consensus?  Or, to turn the standard reaction to “outsider critics” around: what right do we amateurs, who do not work in the field, have to doubt the conclusions of intelligence-research luminary John Horn?  (You see how frustrating this objection can be!)


So what is this critical position I am attributing to Horn?  First, if you have the interest and stamina, I’d recommend just reading his paper.  That said, here is an attempt at a summary.


I disagree with several parts of this, but on the whole they’re somewhat minor and I think this is a well-detailed summary.

Note how far this is from Spearman’s theory, in which the tests had no common causes except for g! 

Moving from a two-strata model, where g is the common factor of a bunch of cognitive tests, to a three-strata model, where g is the common factor of a bunch of dimensions, which themselves are the common factor of a bunch of cognitive tests, seems like a natural extension to me. This is especially true if the number of leaves has changed significantly–if we started off with, say, 10 cognitive tests, and now have 100 cognitive tests, then the existence of more structure in the second model seems unsurprising.

What would actually be far from Spearman’s theory is a world in which the tree structure didn’t work. For example, a world in which the 8 broad factors were independent of each other would totally wreck the idea of g; a world in which the 8 broad factors were dependent, but had an Enneagram-esque graph structure as opposed to being conditionally independent given the general factor, would also do so.
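For intuition, here’s a minimal numpy sketch of the kind of tree described above (the loadings are toy numbers I made up, not fit to any real test battery): broad factors that each load on a single g, and are conditionally independent given it, still come out pairwise positively correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000        # simulated test-takers
n_broad = 8       # broad factors (stratum II)

# Stratum III: the general factor g
g = rng.normal(size=n)

# Stratum II: each broad factor loads on g plus its own specific variance,
# so the broad factors are conditionally independent given g by construction.
loadings = rng.uniform(0.5, 0.8, size=n_broad)
broad = g[:, None] * loadings + rng.normal(size=(n, n_broad)) * np.sqrt(1 - loadings**2)

# Every pairwise correlation between broad factors is positive -- the
# "positive manifold" falls out of the tree structure automatically.
corr = np.corrcoef(broad, rowvar=False)
off_diag = corr[~np.eye(n_broad, dtype=bool)]
print(off_diag.min() > 0)   # True
```

(Stratum I would just hang a bundle of noisy tests off each broad factor the same way.)  A world of independent broad factors corresponds to zeroing out the loadings, and that would show up immediately as near-zero off-diagonal correlations.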


When it comes to comparing g, Gf, and Gc, note this bit of Murray’s argument:

In diverse ways, they sought the grail of a set of primary and mutually independent mental abilities. 

So, the question is, are Gc and Gf mutually independent? Obviously not; they’re correlated. (Both empirically and in theory, since the investment of fluid intelligence is what causes increases in crystallized intelligence.) So they don’t serve as a replacement for g for Murray’s purposes. If you want to put them in the 3-strata model, for example, you need to have a horizontal dependency and also turn the tree structure into a graph structure (since it’s likely most of the factors in strata 2 will depend on both Gc and Gf).


Let’s switch to practical considerations, and for convenience let’s assume Carroll’s three-strata theory is correct. The question then becomes, do you talk about the third stratum or the second stratum? (Note that if you have someone’s ‘stat block’ of 8 broad factors, then you don’t need their general factor.)

This hinges on the correlation between the second and third strata. If it’s sufficiently high, then you only need to focus on the third stratum, and it makes sense to treat g as ‘existing,’ in that it compresses information well.
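One toy way to quantify “compresses information well” (the correlation values here are hypothetical, not estimates from real data): for k equally correlated broad factors, the first principal component’s variance share works out to (1 + (k-1)r)/k, so the higher the inter-factor correlation r, the more of the 8-factor stat block a single g-like score captures.

```python
import numpy as np

def pc1_share(k, r):
    """Variance share of the first principal component for k factors
    with uniform pairwise correlation r."""
    C = np.full((k, k), r) + (1 - r) * np.eye(k)
    eigvals = np.linalg.eigvalsh(C)
    return eigvals.max() / eigvals.sum()

# Analytically this equals (1 + (k-1)*r) / k for an equicorrelated matrix:
# e.g. with k=8, r=0.5 a single score carries over half the total variance.
for r in (0.2, 0.5, 0.8):
    print(r, pc1_share(8, r))
```

So “does g exist, practically speaking” reduces to an empirical question about how big r actually is among the broad factors.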


This is the thing that I disagree with most strenuously:

In both cases, when one looks closely at the claim of a consensus that general intelligence exists, one finds something that does not look at all like such a consensus. 

Compared to what? Yes, psychometricians are debating how to structure the subcomponents of intelligence (three strata or four?). But do journalists agree with the things all researchers would agree on? How about the thugs who gave a professor a concussion for being willing to interview Charles Murray?

That’s the context in which it matters whether there’s a consensus that general intelligence exists, and there is one. Sure, talk about the scholarly disagreement over the shape or structure of general intelligence, but don’t provide any cover for the claim that it’s worthless or evil to talk about a single factor of intelligence.


Magic, Hard, Soft, and Rooted →

bambamramfan:

nostalgebraist:

bambamramfan:

As you may know, Brandon Sanderson (author of the Mistborn series, among other great work) writes about hard magic, the kind with rules, and soft magic, the kind that’s unpredictable and mysterious. He prefers to write the former and argues that problems should only be solved by magic when the magic makes sense, so that the author doesn’t take the easy way out.

I think this phrasing reveals a lot about what rationalist authors think of romantic fantasy. And it’s at once both very intuitive, and if you think about it, utterly incomprehensible.

What the fuck does “take the easy way out” mean???

Sanderson means “I have set up a hard system so that the heroes have to think to figure out a solution that comes from those rules, instead of fiating a deus ex machina that took me no effort to puzzle out.” And that sure sounds like hard work by him, but how is the opposite easy?

If someone writes an unsatisfactory story where DeM magical solutions are used, and people dislike the ending and the entire set of characters and themes because of it, was that easy? I don’t think failing a math test counts as easy. If you wrote a bad story, that should be assessable from the story itself, and not only from how much effort the author used.

And if people do like the end? Is that a worse story than the hard systems? “What the hell she could have used her magic shoes to go home at any time? This whole ruby slippers and there’s no place like home is MAIL FRAUD.” Like, no, it’s still pretty good. There was real effort there by the hero (the emotional journey to reveal what you truly want) even if the system was metaphorical as hell.

I think the quote actually makes a lot of sense on a slightly different reading.  As I understand it, “taking the easy way out” is a description of an event in the writing process, not a property of the finished work.  It’s not that deus ex machina solutions never “work” in the finished product; instead, Sanderson and Moseman are pointing to a dynamic in the writing process which tends to produce the kind of deus ex machina solutions that don’t work.

Specifically, I think the point makes the most sense if we take “solving problems by magic” to mean solving the writer’s problems by magic: running into some plotting snafu with no obvious resolution that works in terms of character and theme and all that, and forcibly undoing the snafu with in-universe magic.  Sometimes, such a solution will “work in terms of character and theme and all that,” but the point is that it will be appealing whether or not it does.  Some di ex machina are there because a deus ex machina was really the right thing for the story at that moment, but many are there just because the writer wanted to resolve a plotting snafu.  The latter type can work artistically, but only by happy accident.

You can’t look at a story and say definitively whether the writer “took the easy way out” with some decision, but you can look at a story and say “this deus ex machina doesn’t work artistically.”  And if you ask why the author chose to put such a thing in their story, often the answer will be that they introduced it to solve a plotting problem without doing the hard work necessary to invent a solution that coheres artistically with the rest of the story.  That is, they took the easy way out.

So under your explanation, the model of writing is “I have brought the characters to a certain point. I know at the end of the story they need to be somewhere else. So I have a problem to solve, of how to get them from A to B.” And the low-effort way to solve this (a wizard appears and does it!) is “taking the easy way out.” Whereas a more praiseworthy author would have a consistent system in place or have figured out some in universe trick.

So serious question: does anyone do this? Not just the easy way out, but even that entire conception of writing. Like in The Northern Caves, did you know where you wanted to end up, but just didn’t know how you’d get there? Because TNC doesn’t read in the slightest like that.

This is my personal experience with writing, and of all the writers I have observed. The theme of the work works with the characters to take the narrative in certain directions, and that’s just where everything ends up. Or maybe you knew the end from the get go, so everything the character does has been like an arrow leading up to this point. But there’s never been a… puzzle to solve, of how to get this character to cross the final gap, whereby my choices are “magic!” or “hard work of coming up with a logical system.”

Like I really don’t think Frank Baum got 90% of the way through Oz and said “shit, how is Dorothy getting home.” It’s not a matter of how he answered that question, but I don’t think the question even came up. But, maybe other authors will say “yes, that is a gap we must frequently solve.”

In which case, a more interesting differentiation is not between authors who use hard versus easy methods of solving these “plotting problems” but authors who write such that a plot problem even comes up, versus those who don’t.

Yes, my personal experience with plotting is like what I describe (I am an example of someone who “writes such that plot problems come up”).  Plotting TNC was a giant headache at times, as was plotting Floornight.

I don’t have any ready examples at hand, but I feel like writers talking about “plotting problems” is a fairly common thing – like, on writing advice sites, or blogs of writers who talk a lot about their process.

When I am designing a plot, there are always pieces that are more certain and less certain.  I can know exactly what a scene 2/3 of the way through is going to look like, but not how we’ll get there; I can have one character’s story planned in a lot of detail while a second character’s is very fuzzy, even though the two characters are supposed to constantly interact; all sorts of other permutations are possible.

The basic reason that “plotting problems” come up is that this uncertainty doesn’t follow a causal-like pattern, even though narratives are supposed to (usually) obey the rules of causality.  In real life, uncertainty increases as you think about times further and further in the future; in a plot under construction, the past can be hazier than the future, the cause hazier than the effect.  And you can have networks of causation that make sense, so long as certain fuzzy pieces spit out certain effects – event X makes sense thematically and character-wise in all these different ways for everyone/everything involved but it requires character Z to do a particular thing beforehand, and I always sorta assumed character Z was going to do that and never thought about it explicitly, but now I need to find a reason for Z to do that, so that this event that naturally flows from every other piece can still happen … 

I guess I am confused by the alternative experience you describe because it sounds so easy – are you saying that for many (most?) writers, there is never such thing as a “plotting problem”?  That a plot which makes both thematic and causal sense just comes to them naturally, in one fell swoop?

themodernsound asked: So while we're talking about bupropion; to my knowledge every Serotonin-Dopamine Reuptake Inhibitor yet researched has been shelved as a potential anti-depressant because of abuse liability. Why, in your estimation, would this be any different than combining bupropion with an SSRI (discounting NRI activity of course but that would really only further highlight my curiosity/confusion)?

slatestarscratchpad:

nostalgebraist:

slatestarscratchpad:

It shouldn’t be. The serotonin should be irrelevant. The interesting question is why bupropion doesn’t have abuse potential.

(it sort of does - there are people who will abuse bupropion - but it’s not that common and you have to be desperate)

The answer is “nobody knows anything about any of this”. Sinemet is Literally Dopamine, and you can take it and recover from hypodopaminergic disorders, yet it’s not addictive. Pramipexole is a great dopamine agonist and it’s not addictive either.

My guess is this stuff has to do with where in the brain it increases dopamine, how quickly, how much, by what method, etc. But that’s all just a guess.

We totally know about this!  Some dopamine “reuptake inhibitors” bind to the dopamine transporter when it’s in the “open configuration” and others when it’s in the “closed configuration,” and the difference maps onto abuse potential: cocaine and methylphenidate are on one side (“binds to open DAT”), bupropion/hydroxybupropion and modafinil are on the other (“binds to closed DAT”), and there are some more esoteric/research-type chemicals on each side which also fit the pattern.  Here is a paper about it, and here’s another.

The latter paper speculates, at the end, that the DAT is just more frequently in the open configuration than the closed one (at least I think that’s what they’re saying).  So drugs that bind to open DATs get more chances to bind than those that bind to closed DATs, all else being equal.

Of course, all else isn’t ever equal, and this is presumably just one (important) contribution to the overall effect, along with the usual ones like binding affinity and rate of delivery to the brain.  Bupropion is particularly complicated because it is largely a prodrug to the metabolite hydroxybupropion when taken orally (you end up with way more hydroxybupropion than bupropion in plasma), and hydroxybupropion is a much weaker dopamine reuptake inhibitor than bupropion.  As @wirehead-wannabe mentioned, people do (ab)use bupropion recreationally by insufflating it (or injecting it), which bypasses first-pass metabolism, so you get more actual bupropion as opposed to hydroxybupropion in your brain (and of course it is delivered way faster).  It makes sense that this would be more reinforcing, although one would think the “binds to closed DAT” thing would still mean it isn’t all that reinforcing.  (Which seems likely to be true: if snorted bupropion produced a high anywhere near comparable to cocaine, you’d think it would get really widely abused and be made a controlled substance.)

(N.B. the first paper linked above is about modafinil, which is apparently a “binds to closed DAT” reuptake inhibitor, like bupropion.  From what I have read, e.g. this paper, it looks like that is probably all modafinil is; the fancy hypotheses about histamine and orexin don’t explain anything that dopamine reuptake inhibition wouldn’t.  So modafinil is probably a much “cleaner” example of what you get from “binds to closed DAT” inhibitors, without all the bupropion-specific weirdness discussed above.)

Anyway, I don’t know anything about SDRIs, but if the ones mentioned by the anon asker were the “binds to open DAT” kind of DRI, that would explain everything nicely.

Thanks, I didn’t know that (and must have missed the past few times you’ve blogged about it).

I’m a little confused, though. You have the positive effects of stimulants and you have the abuse potential. If modafinil has fewer opportunities to bind than amphetamine, it sounds like you would need a higher dose in order to get the same level of positive effects. But wouldn’t that also increase the abuse potential? I.e., why does the open/closed thing change the effectiveness/abuse ratio?

My impression is that this is not well-understood; the open/closed thing is a relatively new idea, and the papers tend to say things like “we know this relationship to abuse potential is there, but not why.”  However, I should also say that these papers involve a lot of biochemical detail I don’t understand, and that anything I say about explanations for the relationship (as opposed to its existence) should be taken with a grain of salt.

That said: in the model I have in my head, the open/closed thing sets an upper bound on how much inhibition the drug can ever do, no matter the dose.  That is, the most a “binds to closed” drug can possibly do is bind to every single DAT in the closed configuration – but if that’s always just a small fraction of the DATs in the nervous system, then that puts a cap on the intensity of the effect, no matter what the dose.  Analogy: something that can only happen on Wednesdays can never happen more than 1/7 of all days, even if it happens every Wednesday; by contrast, something that can happen any day except Wednesday could easily happen on more than 1/7 of all days, even if it doesn’t happen every day it possibly could (which would be 6/7 of all days).
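That Wednesday arithmetic can be made concrete with a toy one-site binding model (illustrative numbers only, not real pharmacology): the fraction of transporters that are ever in the conformation a drug can bind caps its total occupancy no matter how high the dose goes.

```python
def occupancy(dose, kd, accessible_fraction):
    """Fraction of all DATs occupied, under simple one-site binding:
    only transporters in the drug's preferred conformation are reachable."""
    return accessible_fraction * dose / (dose + kd)

f_closed = 0.15   # hypothetical: DAT is rarely in the closed conformation
f_open = 0.85     # hypothetical: the open conformation is the common one

for dose in (1, 10, 100, 1000):
    closed_binder = occupancy(dose, kd=1.0, accessible_fraction=f_closed)
    open_binder = occupancy(dose, kd=1.0, accessible_fraction=f_open)
    print(dose, round(closed_binder, 3), round(open_binder, 3))

# Even at saturating doses the closed-binder never exceeds f_closed,
# while the open-binder's ceiling is much higher.
```

In this picture, raising the dose of a “binds to closed” drug only pushes occupancy toward the (low) ceiling, which would break the usual “higher dose, more abuse potential” logic.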

Oh, also, I think the comparison should be cocaine vs. modafinil rather than amphetamine vs. modafinil – cocaine is just a DRI, amphetamine’s mechanism is more complicated.  (Something like “releasing agent + reuptake inhibitor,” but TBH I don’t fully understand it.  Releasing agents can have very different effects than reuptake inhibitors: compare SSRIs to MDMA!)

(This is obviously not your fault, but I really should be working, so I’m going to try to stay away from tumblr for at least the next few hours.)

There are two ways we can produce automobiles. We can build them in Detroit or we can grow them in Iowa. Everyone knows how we build automobiles. To grow automobiles, we first grow the raw material from which they are made – wheat. We put the wheat on ships and send the ships out into the Pacific. They come back with Hondas on them.

From our standpoint, growing Hondas is just as much a form of production – using American farmworkers instead of American autoworkers – as building them. What happens on the other side of the Pacific is irrelevant; the effect would be just the same for us if there really were a gigantic machine sitting somewhere between Hawaii and Japan turning wheat into automobiles. Tariffs are indeed a way of protecting American workers – from other American workers.

David Friedman (via atidd)

How do agricultural subsidies play into this?

(via nostalgebraist)

“Those who raise corn should not be taxed to encourage those who desire to raise beets. The power to tax was never vested in a Government for the purpose of building up one class at the expense of other classes.”

Subsidies protect American farmers - from other American farmers (sugar is one of the most heavily protected products at around 55%; grains are iirc. ~20% and most other products have comparatively little subsidy, ~0-5%). And from American taxpayers who otherwise might have more money to spend on agricultural products. And from producing the things that would be genuinely most in demand. And so on.

(via oktavia-von-gwwcendorff)

What I mean is, supposing that agricultural subsidies stay basically the same while the tariff is implemented, wouldn’t they absorb some of the effects of lower demand for wheat production on individual farmers, where the same is not true for similar effects on individual auto workers?

Like a theory of the second best thing

(via oktavia-von-gwwcendorff-deactiv)

Performance Trends in AI →

you-have-just-experienced-things:

you-have-just-experienced-things:

nostalgebraist:

A number of people recommended this post to me, and it is indeed good and worth reading.  I say that only partly because it provides evidence that aligns with the preconceptions I already had :P

Specifically, here is what I wrote in this post:

I was thinking about this stuff after I was arguing about deep learning the other day and claimed that the success of CNNs on visual tasks was a special case rather than a generalizable AI triumph, because CNNs were based on the architecture of an unusually feed-forward and well-understood part of the brain – so we’d just copied an unusually copy-able part of nature and gotten natural behavior out of the result, an approach that won’t scale

The gist of Sarah’s post is that in image recognition and speech recognition, deep learning has produced a “discontinuous” advance relative to existing improvement trends (i.e., roughly, the trends we get from using better hardware and more data but not better algorithms) – but in other domains this has not happened.  This is what I would expect if deep learning’s real benefits come mostly from imitating the way the brain does sensory processing, something we understand relatively well compared to “how the brain does X” for other X.

In particular, it’s not clear that AlphaGo has benefitted from any “discontinuous improvement due to deep learning,” above and beyond what one would expect from the amount of hardware it uses (etc.).  If it hasn’t, then a lot of people have been misled by AlphaGo’s successes, coming as they do at a time when deep learning successes in sensory tasks are also being celebrated.

Sarah says that deep learning AI for computer games seems to be learning how to perform well but not learning concepts in the way we do:

The learned agent [playing Pong] performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection.  Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

This is reminiscent of something I said here:

My broad, intuitive sense of these things is that human learning looks a lot like this gradient descent machine learning for relatively “low-level” or “sensorimotor” tasks, but not for abstract concepts.  That is, when I’m playing a game like one of those Atari games, I will indeed improve very slowly over many many tries as I simply pick up the “motor skills” associated with the game, even if I understand the mechanics perfectly; in Breakout, say, I’d instantly see that I’m supposed to get my paddle under the ball when it comes down, but I would only gradually learn to make that happen.

The learning of higher-level “game mechanics,” however, is much more sudden: if there’s a mechanic that doesn’t require dexterity to exploit, I’ll instantly start exploiting it a whole lot the moment I notice it, even within a single round of a game.  (I’m thinking about things like “realizing you can open treasure chests by pressing a certain button in front of them”; after opening my first chest, I don’t need to follow some gradual gradient-descent trajectory to immediately start seeking out and opening all other chests.  Likewise, the abstract mechanics of Breakout are almost instantly clear to me, and my quick learning of the mechanical structure is merely obscured by the fact that I have to learn new motor skills to exploit it.)

It is a bit frustrating to me that current AI research is not very transparent about how much “realizing you can open treasure chests”-type learning is going on.  If we have vast hardware and data resources, and we only care about performance at the end of training, we can afford to train a slow learner that can’t make generalizations like that, but (say) eventually picks up every particular case of the general rule.  I’ve tried to look into the topic of AI research on concept formation, and there is a lot out there about it, but a lot of it is old (like, 1990s or older) and it doesn’t seem to be the focus of intensive current research.


It’s possible to put a very pessimistic spin on the success of deep learning, given the historically abysmal performance of AI relative to expectations and hopes.  The pessimistic story would go as follows.  With CNNs, we really did find “the right way” to perform a task that human (and some animal) brains can perform.  We did this by designing algorithms to imitate key features of the actual brain architecture, and we were able to do that because the relevant architecture is unusually easy to study and understand – in large part because it is relatively well described by a set of successive “stages” with relatively little feedback.

In the general case, however, feedback is a major difference between human engineering designs and biological system “design.”  Biological systems tend to be full of feedback (not just in the architecture of the nervous system – also in e.g. biochemical pathways).  Human engineers do make use of feedback, but generally it is much easier for humans to think about a process if it looks like a sequence of composed functions: “A inputs to B, which inputs to C and D, which both input to E, etc.”  We find it very helpful to be able to think about what one “part” does in (near-)isolation, where in a very interconnected system this may not even be a well-defined notion.

Historically, human-engineered AI has rarely been able to match human/biological performance.  With CNNs, we have a special case in which the design of the biological system is unusually close to something humans might engineer; hence we could reverse engineer it and get atypically good AI performance out of the result.

But (I think; citation needed!) the parts of the brain responsible for “higher” intelligence functions like concept formation are much more full of feedback and much harder to reverse engineer.  And current AI is not any good at them.  If there are ways to do these things without emulating biology, many decades of AI research has not found them; but (citation needed again) we are no closer to knowing how to emulate biology here than we were decades ago.

It is a bit frustrating to me that current AI research is not very transparent about how much “realizing you can open treasure chests”-type learning is going on.

This seems like a problem with the current paradigm of throwing really big ANNs at things. Many-layered neural networks don’t look as attractive when it is important to understand why the model made a certain decision (e.g., when making algorithmic trades).

Currently, unless I am deeply mistaken about the state of the art, answering the “does the model know it can open treasure chests” question is not easy. (Then again, it was only easy in the previous paradigm of SVM, random forests, etc, because those models really aren’t that sophisticated in comparison.) This makes me somewhat skeptical of generalizations like this:

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

At face value, this is probably true. But it’s not clear to me (at this specific point in the research process) that we would be able to recognize that “conceptual learning” is happening in a given ANN even if it was actually happening. I am hoping that we can just solve this by simply getting better at understanding why big many-layered networks behave in certain ways.

(I was trained in the previous paradigm. The best thing I can say about the people I worked with is that they were extreme sticklers about understanding exactly how every model worked, always. Nowadays, my coworkers use a GPU cluster + massive neural nets to extract tables from PDFs without caring about such things.)

@nostalgebraist​:  

@you-have-just-experienced-things: what you say seems true in the general case, but simple video games are a particular case where it’s easy to make the judgment, at least intuitively – so it bothers me that people /are/ throwing ANNs at these games but /not/ treating them as a test bed for this question.

(cont.) both sarah and i had to look at supplemental videos of gameplay and try to answer these questions for ourselves; where are the papers that put the questions front and center?  researchers can interpret gameplay videos just like anyone else, after all

I generally agree with this. And I strongly endorse the “Where are the papers?” question, especially because it seems like it can be answered by understanding the trends/incentives of the deep learning folks doing the research.

Re the Pong video: I don’t agree with Sarah’s interpretation–it doesn’t look like the right paddle ever stops moving, which suggests the model was created with only two output states (Up/Down) rather than Up/Down/Stop. This could explain the jiggling as the ball gets closer to the paddle.

This is kind of why I am wary of making these interpretations based on the observed output, even in these simple cases. There is a lot of nuance to how these networks are trained that is not necessarily obvious from looking at gameplay videos. What I really want is the kinds of analysis you can do on a much simpler model, where (once you understood the math) there is no ambiguity about why a classifier was making a certain prediction. I acknowledge that this is way too high of a standard, though.

(via you-have-just-experienced-things)

@philippesaner

For what it’s worth, I think you investigating alternative solar cell designs is pretty valuable even if they all turn out to be terrible and your papers always gloss over that.

Honesty would be better, of course, but the gloss doesn’t destroy the thing it makes shiny.

I’m not @thepenforests​ (to whom this was a response), but they and I seem to share similar frustrations about academia, and FWIW I think the issue is that the gloss does kinda destroy the thing it makes shiny

If you take the question to be “are people investigating alternative solar cell designs, or not doing that?”, then yeah, “people are doing that but not in an ideal way” looks pretty good relative to the alternative of not doing it at all.  But those aren’t really the relevant alternatives.  I think we should take it as a given that someone is going to be “investigating alternative solar cell designs” – because the reaction you had here, that this is clearly a valuable endeavor, is a natural reaction that many people will have, and some of those people will have a lot of money and will want to pay others to do the thing (whether this is in the form of government grants, philanthropic gifts, venture capital, or whatever).

So if this kind of research is going to be done one way or another, the remaining question is how well or poorly it is done.  And a pressure to exaggerate successes and paper over serious flaws is going to make the designs worse than they could be if the same researchers didn’t face those pressures.  If you’re free to say “okay, this design concept is clearly flawed on a fundamental level, let’s throw it away and do something else,” that saves you from wasting more time on it – but in the grants/publications/reputation game, you want to have a consistent personal brand where you keep building on your own work and weaving a narrative.  (“In 2009 we introduced the NewThing, in 2010 we proved that the NewThing has all these cool properties, in 2011 we showed how the NewThing has unexpected applications and outperforms SomeOldThing in them … in 2017 the NewThing paradigm is more exciting than ever”).  So bad ideas stick around, people develop strategies for deflecting criticism rather than adapting in response to it, and scientists waste a lot of brain cycles.  This is frustrating, and is in some ways more frustrating in a subject that is of real practical importance.

I suppose there is a certain arbitrariness in taking one kind of societal phenomenon as fixed (“someone will always fund this research”) but treating another as potentially variable (“this research doesn’t have to be done in this frustratingly inefficient way, it could be better”).  Maybe the latter is just as difficult to change as the former.  But at the very least, it is much less obvious, to most people, that the latter is actually true, which leads to frustrated disappointment on the part of people like @thepenforests – it’s obvious that this research is being done (it’s in the news, on academic websites, etc.), but you don’t necessarily know how inefficiently it is being done until you actually start doing it.  And I think this disappointment makes perfect sense as an emotional reaction, even if it turns out there’s nothing we can do about the situation, or that in hindsight we shouldn’t have expected anything better.  (Getting the incentives for researchers to align well with scientific progress is not an easy task, and as @napoleonchingon has said upthread, similar misalignment happens in industry too; maybe this is just part of the tragedy of human existence or whatever)

(via philippesaner)

robertskmiles:

nostalgebraist:

pluspluspangolin:

nostalgebraist:

Computer question:

Esther wants to block herself from visiting tumblr on her computer (at least for a time).  There are various productivity apps out there like Stayfocusd, Coldturkey, etc., but some of them are pretty easy to disable if you actually want to – they’re meant to make it inconvenient to do something rather than actually stop you, on the assumption that if it’s inconvenient enough you’ll avoid it.  This is more of an “Odysseus wants to tie himself to the mast” situation.

Does anyone have suggestions for really hard-to-circumvent ways of blocking websites on Windows?  Has to be something that can block individual tumblrs, not just tumblr.com (this is a problem with using the hosts file, which doesn’t take wildcards).  For operationally defining “hard” here, “intimidating” is much more relevant than “inconvenient” – for instance it would be good if you’d have to do something configuration-dependent that can’t be distilled into one universal list of steps in a StackOverflow answer.  (By contrast, a single universal list of steps that is merely long is much less good.)

blocking individual tumblrs in the hosts file should be doable, IIRC
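For reference, a hosts-file entry can target a specific subdomain even though wildcards aren’t supported – the blog name below is just a placeholder:

```
# C:\Windows\System32\drivers\etc\hosts
# Block one specific tumblr (hypothetical name); add one line per blog.
0.0.0.0 example-blog.tumblr.com
```

Editing this file on Windows requires administrator privileges, which adds a little of the “intimidating” friction asked for above.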

you could instead implement blocks on your router (most routers have parental controls or similar features that can be used to block sites) then set the router password to something that you know but Esther doesn’t, so as to definitely prevent her from circumventing the block

Oh I realized I didn’t make this clear: we aren’t in the same location right now and solutions here should assume we aren’t.  We considered similar things but I’d have to be able to enter the password remotely.  Doing that through remote desktop could work, maybe, so we’ll try that if there isn’t a more convenient option

If the password is sufficiently long and high entropy, you can just read it to her over the phone as she enters it and she won’t be able to recall it later, right?

Ohh wow, yeah, that’s a great idea.  Thanks!

(via robertskmiles)

Kids can't use computers... and this is why it should worry you →

you-have-just-experienced-things:

nostalgebraist:

evolution-is-just-a-theorem:

ilzolende:

katiewompus:

I might as well have asked her ‘Can you tell me how to reticulate splines using a hexagonal decode system so that I can build a GUI in Visual Basic and track an IP Address.’

Here’s an idea. When they hit eleven, give them a plaintext file with ten-thousand WPA2 keys and tell them that the real one is in there somewhere. See how quickly they discover Python or Bash then.

Quality lines in this article.

I feel Called Out, except I actually have done things like upgrade memory. But I still would need to ask for help or use Atom if I wanted to, say, search for a string in a folder of text files…

First:

  • Find all instances of string: grep -r "$string" path/to/folder
  • Find all instances of string in text files only: grep -r --include="*.txt" "$string" path/to/folder (many variants of this exist, e.g. find . -name "*.txt" -exec grep "$string" {} +, depending on exactly what you want)
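A quick, self-contained sanity check of the text-files-only variant (all paths and file contents here are made up for the demo):

```shell
# Create a throwaway folder with one .txt file and one .log file
mkdir -p /tmp/grep_demo
printf 'hello world\n' > /tmp/grep_demo/a.txt
printf 'hello again\n' > /tmp/grep_demo/b.log

# Only the .txt file should match
grep -r --include="*.txt" "hello" /tmp/grep_demo
# prints: /tmp/grep_demo/a.txt:hello world
```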

Second (warning for tech elitism and me being really rather mean):

Keep reading

@evolution-is-just-a-theorem​ you sure know how to write an entertaining takedown <3

The original blog post is so bad, and so conformant with the worst of Annoying Tech Guy stereotypes, that I almost can’t believe it’s sincere – but that’s my bias towards the nicer side of Poe’s law speaking, and the guy has a web presence (on Github etc.) that’s consistent with his self-presentation in the post.

There are so many weird little things to grumble about – like, those examples of people who “don’t know how to use a computer”?  A lot of those sound similar to the phenomenon where one person spends hours hunting for a bug and then someone else reviews their code and notices it quickly, because they weren’t focused on the same details.  Yeah, the guy whose ethernet cable was unplugged looks “dumb” in retrospect, but it’s not like ethernet cables are constantly getting unplugged by gremlins in everyday life; everyone comes in to a problem with some base assumptions and “no one messed with the hardware because why would they” is pretty good as base assumptions go.

(There’s another point which is still vague in my mind, but runs something like this: computer technology is important enough that we should educate people about it, but not necessarily by forcing them to confront the inner workings of that technology when they’re trying to use it for other things.  It might be good to know how the plumbing of my house works, but I’m glad I don’t usually have to think about the plumbing at all when I just want some tap water.  And the knowledge that makes someone a good IT computer-fixer != the knowledge needed to understand the big-important-scary stuff happening with computers in the world, necessarily.  Does my plumber understand water scarcity issues any better than I do?)

The original article is so bad that it’s pointless to respond to it directly. I think I understand what the OP is trying to argue against, he just does it in the worst possible way. Here is an attempted steelman/reinterpretation:

1) There is a pretty flawed narrative out there about digital natives and how technology comes naturally to them. This is sometimes taken to absurd extremes. (My aunt even told me once that I should be worried about my job security as a programmer, because she read an article about how all these “digital natives” are so inherently good with computers that they will make me obsolete.) I don’t think this narrative is particularly important, but it is observably wrong.

2) When you work in IT support, you see the worst of humanity. I only did this for 18 months in college, but for those 18 months it was a miserable experience. IT support is a service job, and most people treat service workers horribly–but something about dealing with technology is uniquely frustrating for many people, and they are more than willing to unleash that frustration on whoever is helping them. It’s difficult to describe how horrible it can feel to be on the receiving end of this unless you’ve experienced it. It’s regrettable that the OP used this essay to complain about his job, because that emotion really needs to be kept separate from the argument.

3) People like treating things as magic, and technology especially so. Learned helplessness abounds. This is also one of the reasons why IT is so soul-crushing; @evolution-is-just-a-theorem seems to agree here. (Incidentally, I feel pretty put off by their critique of the original article, for reasons I can’t articulate. Might just be general uneasiness about snark.) Many people would benefit from investing just a little bit of time into basic IT skills like “how to google for an error message.” There is, of course, a point of diminishing returns to learning how to be good at debugging computer issues, but most people are nowhere near that point yet.

4) Anecdotally, I observe a strong correlation between programming knowledge and being concerned about big-picture technology issues, specifically government surveillance. I don’t have a good explanation for this, and I don’t know if I’m simply looking at a biased sample. Nor am I confident that the causation doesn’t go the other way. But it seems like understanding how things work on a slightly deeper level does lead people to think more deeply about the big-picture stuff. (I would definitely trust my plumber’s opinion about water scarcity issues above my own.)

I agree with most of 1-4 above, when phrased this way. And I think if you strip away all the terrible stuff from the original article, this stuff is there beneath the surface. The OP is just obnoxious.

(via you-have-just-experienced-things)

tanadrin:

nostalgebraist:

@tanadrin

But as for the Canon and the Nature of Literature, and the Western Tradition–nobody gives two shits about any of that anymore, except Harold Bloom, and he writes the literary equivalent of pop science because nobody in academia thinks he has anything interesting to say. I think the modernists were the last people who thought that stuff mattered. At this point, even postmodernism is starting to acquire a certain patina of fustiness, and between trends like the digital humanities and cognitive literary theory and the ever-expanding circle of things English departments consider worth study, I wouldn’t bat an eye if somebody comes out with a paper on the statistical analysis of Harry Potter fanfiction any day now. Hell, it’s probably already been published.

I may have miscommunicated there.  My point in mentioning all these influences on the concept of “literature” was not to say that modern English departments study that concept as a concept, but rather that their objects of study are, to a first approximation, the objects that concept groups together.

This can be true even if they don’t explicitly place any value on the “stronger” forms of the concept (“the Canon and the Nature of Literature, and the Western Tradition”).  Historically, the set of books studied by English professors – which is a very, very small subset of all the books that are out there – was shaped by those ideas (along with everything else in the bag), and although the set has changed over the years, there’s a large amount of continuity.

As you say, the set is expanding, but the inclusion/exclusion patterns are hard to make sense of without thinking about this history.  Think of it like the borders of an expanding empire: while the range of climates and cultures represented inside (say) is growing with time, all the specifics of what is included and what is excluded are still only explicable with reference to a history of expansion, starting in a particular place, and likely still “centered” there in several senses of the term.

I know I still haven’t made it clear what I think is being done wrong and could be done better – one does need some borders, and attempts to justify borders as non-arbitrary often go very wrong – but it’s early for me and I haven’t had all my coffee yet, so I’m going to stop for now

Hmm. You’re not wrong about what people study being shaped by the history of English as a subject, of course, but I guess my reaction to that is “that is what academics are interested in.” It’s not a vexing or a difficult question; literature academics are book nerds educated by other book nerds, so there are going to be some diachronic trends in book nerdery, but that doesn’t preclude, and nothing about professional book nerdery as a culture right now precludes, somebody looking up tomorrow, saying, “you know what’s really interesting? Fanfiction!”, and organizing a conference or starting a journal and flinging that particular border of the empire in that direction. The current shape of the borders might give the illusion that there’s some weight or inertia to the borders, is what I’m saying, and right now (at least to me), it feels like that’s very much not the case, especially compared to what people outside the field envision.

One benefit to the social obligation to go to university in the English-speaking world right now is that universities have gotten a lot bigger compared to, say, fifty years ago, and they hire more faculty, and you can’t have every professor in every department in the country working on the Victorian novel anymore even if there was still a strong cultural pressure in that direction. But I think a lot of people’s view of the humanities, and English studies in particular, is based on stereotypes of the situation as it was in some respects eighty or fifty or twenty years ago (depending on which feature they’re talking about).

This is interesting (and heartening), thanks.

I’m realizing that some of my attitudes here are not very carefully targeted.  The place where these issues grate on me the most is actually non-academic literary culture, like mid-to-highbrow book reviewing – I was eventually going to mention people like George Steiner and James Wood, who always strike me as almost unnaturally at ease in the weird melange of literary culture.  (Harold Bloom’s like that too, but with him everyone perceives the weirdness.)  Like, these book critics who have read very widely across the field of “literature” as broadly conceived, and respond to all of it as if it’s all of a piece somehow, even though it’s this crazy historically generated chimera.

IDK if that’s at all clear (I may try to clarify it later), but anyway, that isn’t a reaction to academia.  And insofar as I do see a continuity with academia, it may not be so much with the actual practices of the academic humanities as in the rhetoric surrounding them.  Some of this rhetoric is a result of external challenges to the utility of the humanities, but not all, I think.  When I was an undergrad (at a small liberal arts college, admittedly) people really did talk about the humanities “teaching you to think” or somehow elevating your soul (though they wouldn’t phrase it that way), and the English majors I knew saw themselves as inductees in a certain form of intellectualism rather than just book nerds going pro.  That ludicrous article PL linked is an extreme example, but the rhetoric is familiar, from the undertones of scores of casual conversations I had in college.

Anyway, shit’s complicated and my thoughts about all this are not in good order at the moment, it seems

(via tanadrin)
