rincewitch:

idontcareifitsreal:

pro-gay:

ceeberoni:

brylup:

ceeberoni:

jesusbeans:

Or it’s a jewellery box…

what kind of jewelry box has usb ports. the truth is out there

It’s a fucking mirror

What kind of mirror has USB ports????????

[thirteen images]

confirmed

what the fuck happened here

holy shit

(via rincewitch)

lambdaphagy:

nostalgebraist:

untiltheseashallfreethem replied to your post
“I keep linking to that Holbo post because it seems to get at something…”
Ok, so, I don’t have much stake in this debate, but… (1) a lot of liberal stuff seems to boil down to aesthetics too (what makes equality good? at some point the answers all ground out in “it just is” or “that’s what feels morally right to me”).
(2) maybe this is a difference between “logical” and “intuitive” thought — if liberals value reason more, and conservatives value intuition, then liberals will strive to give rational arguments and conservatives will give intuitive arguments, which
often look aesthetic. (3) … are there people who aren’t convinced by aesthetic arguments? I read that Jack Donovan quote and I am like “yes, I agree, those are exactly the things I hate, and we need to get rid of them, you understand”. Do liberals not have equivalent things? “Look at the world, look at all the people suffering from poverty and inequality. Look how there is racism, and casual violence, and ignorance, and people not even caring about learning.”
How is that not an appeal to aesthetics in the same way?

Yeah, leftists definitely do the same thing as well.  What I’m saying is that it’s hard to find conservatives who are doing anything else.  People like, say, John Rawls and Jerry Cohen made lucid, rigorous arguments in favor of leftist conclusions; you could accuse them of using motivated reasoning to pick their starting axioms, or the like, but at least they are doing a kind of valuable service, in showing how certain ideas can be justified, and that those ideas aren’t self-contradictory.  Who is the conservative equivalent of Rawls or Cohen?  (Okay, maybe it’s Nozick.  But libertarians are kind of a special case here — they do make clear, if not necessarily correct, arguments.  The more traditionalist wing of the right is what I’m aiming at here.)

When I say “aesthetic” I don’t quite think I just mean “intuitive” — I mean something like “based on an overall mental image rather than an assessment of individual effects (costs and benefits).”  An assessment of individual effects can still be intuitive — you can still say something like “at first glance, that policy sounds like a really bad idea.”  But there is a way in which things can “seem like really bad ideas” because you can quickly think of specific terrible effects they will have without corresponding benefits, and another way in which things can “seem like really bad ideas” because they paint nasty pictures in your head rather than nice pictures, and these two ways are very different.

Again, I am not saying that leftists don’t do the picture thing (because they clearly do, a lot), but that on the left I see people trying to formalize the pictures, where I have not been able to find an equivalent on the right.  (And I don’t think it’s for a lack of trying?  I spend a weirdly large amount of time reading right-wing news sources and blogs, given that I almost never agree with them.)

Of course, there may be a larger gap here that cannot be bridged.  If that Donovan passage resonates with you then we are very different.  My point is that there is no “formal Donovan” in existence who might bring me around to his POV when it doesn’t have intuitive appeal to begin with.

This is a really interesting post.  I think it’s a bit uncricket though to compare Frum, a journalist, to Rawls and Cohen, two political philosophers.  A more intrafructal comparison might be Frum vs. Pollitt, or Oakeshott vs. Rawls.

And I get Rawls, but I’m kind of confused about how Cohen made that list.  Why not Socialism? is literally a book about how nice it would be to go on a camping trip with your friends and work together in the woods and share everything, and wouldn’t it be terrible if those friends didn’t share with you?  Therefore Marxism. You might say that it was merely a popular work, but it’s something Cohen thought made enough sense to publish with his name on it.  Plus it’s got over a hundred citations, so someone out there must be taking it seriously.  

All that aside, Rawls and Cohen belonged to a specific tradition of Anglo-American philosophy that self-consciously prized the traits you’re talking about.  (As John Perry said with apologies to Paul, now abideth these three: truth, profoundness, and clarity; but the greatest of these is clarity.)  Whereas the philosophical tradition currently seeping out of the most influential institutions of the American left is opposed to the idea of political philosophy in the key of Rawls.  I really hope I’m wrong about that.  But consider: “lived experience”, “ways of knowing”, “marginalization”, “sites of resistance”, “bodies”, “de-centering”.  Where did this language come from?  From badly-translated French critical theorists, overwhelmingly.  Not from Bertrand Russell, that’s for sure.

Anyway, I think the issue is that there is really only one good conservative ur-argument, which is that societies are complicated.  They rely on distributed knowledge that is never consciously articulated, and there are many more ways to break these mostly functioning arrangements than to improve upon them.  Hence changes are likely to be for the worse, &c.  But that argument proves too much so back you go, down to the object-level, to figure out whether this or that proposed improvement will in fact deliver without also leaking carcinogens into the social secret sauce, when you already admitted that you don’t even know the recipe in the first place.  

A lot of the people who made these arguments best (Hayek, Oakeshott, Berlin, Popper, Posner, &c.) were not exactly rank-and-file movement conservatives.  Still, they were doing conservatism a favor.  And I wouldn’t say that they were writing straight from the amygdala, either.

In any case, Sturgeon’s law is an underestimate for political writing across the spectrum.  I’d much rather learn of new writers who are attending to the criteria of argumentation that we both value.  For example, I learned of Mark Kleiman (an ideas man on the left if there ever was) through Reihan Salam, the country’s weirdest mainstream conservative and also a GA Cohen fan.  Perhaps people who appreciate one will appreciate the other as well.

PS: the other thing that puzzles me is that the guy who’s criticizing Frum here for being an impressionist is John Holbo.  Don’t get me wrong – Holbo does Holbo like no one can.  You can’t help but admire.  The zany, pyrotechnic style that has Plato ten paces out into thin air before he realizes he’s run off the cliff, and William Beveridge knocking out Harold Macmillan with an Acme Corp. accordioned boxing glove.  It’s impressive stuff, and I wish I could do it on command.  (Perhaps not if it meant never being able to turn it off, ahem.)  But could you actually propositionally disagree with Holbo in the sense of identifying a premise you reject?  You gotta admit it’s not exactly theorem, proof, theorem, proof.

Thanks for writing this – it’s exactly the kind of response I wanted and needed.  For instance, the reason I didn’t mention Oakeshott was that I actually just didn’t know who he was – or, rather, I’d heard his name before but had no idea he was the kind of figure I was looking for.

(Hayek I haven’t read but am wary of, because the caricatures of the “road to serfdom” argument I’ve heard make it sound like everything that’s wrong with slippery slope arguments: “first you’re doing something unobjectionable, and before you know it you’ll be doing terrible things in the same general category!  Brushing your teeth is the first step on the road to removing your teeth from your mouth with heavy-duty dental equipment, etc.”  But maybe he’s better than they say.  I’ve read a bit of Posner and he seems to suffer from that problem judges have where they’re smart but are used to writing huge amounts of stuff, so the quality of their arguments varies wildly.  Anyway, any recommendations of particular texts would be welcome.)

I mentioned Cohen more for analytical Marxism as a whole than for Why Not Socialism? – the point being that there is an “analytical Marxism,” and there isn’t an “analytical paleoconservatism” (or whatever), for pretty much the reasons you describe.  But to the extent that there are (perhaps untraditional, if not un-traditionalist) conservatives with the analytic phil. virtues, I’d like to be aware of them.  They’re certainly a bit harder to find than Rawls and friends, though we’d expect that from academia leaning left generally.

I don’t think I fully agree with you that current left circles are opposed to political philosophy in the analytic tradition.  The popularity of stuff along the lines of Standpoint Theory (oppressed people have access to knowledge that privileged people generally don’t) doesn’t have to amount to some sort of radical skeptical position – it can just be an empirical claim similar to “if you want to learn physics, talk to a physicist.”  Of course people take this in many directions, but I don’t think there’s an inherent opposition to clarity or philosophical questioning there, and there are lucid, thoughtful writers out there who do the Standpoint Theory thing.  (Meanwhile, there are many non-lucid leftist writers in academia who take inspiration from French Theory, but I don’t think these people should be conflated with the ones just mentioned.  Compare a popular feminist blog to a selection of MLA Convention papers – they’re participating in very different conversations, with different norms.)

Finally, about Holbo, I completely agree – I read a bunch of his stuff a long time ago and eventually ended up feeling like I’d binged on too much candy.  The zany sugar-high style is a lot less enchanting once you realize there may not be any solid ground beneath it, even in his actual academic papers.  (IIRC his pieces on Zizek’s lack of clarity were about as hard to understand as Zizek, though for very different reasons.)  But that review is a bit of an exception – Holbo does make some identifiable claims about Frum’s book, and they seem important to me, insofar as his description of Frum applies well to a lot of popular conservative writers.  Of course the claims are presented in Eye-Popping HolboVision!!! but that just means you get delicious flavor along with your nutrition.

(via lambdaphagy)

carlyle

Jonathan Rose’s book The Intellectual Life of the British Working Classes is fascinating, and I highly recommend it.  One of the main ideas that comes across in Rose’s book is that much of the British working class in the 19th century read a lot – and read a lot of difficult, deep stuff, and had highly specific taste.  One curious aspect of this working class taste is that they tended to prefer conservative authors, but not necessarily because they were themselves conservative.

Indeed, even leftist agitators among the working class often found inspiration in Edmund Burke and Thomas Carlyle, two of the authors most popular among the working class as a whole.  Burke is famous as a godfather of modern conservatism; Carlyle is much less famous.  It is possible that this is because he is just too reactionary for our modern democratic selves, but listen to Rose:

When the first large cohort of Labour MPs was elected in 1906, the Review of Reviews asked them to name the books and authors that had most deeply influenced them. […] Note that thirteen respondents mentioned Thomas Carlyle … [4th most popular after John Ruskin (17 votes), Dickens (16), and the Bible (14)]
[Carlyle] had a huge following among autodidacts … Carlyle’s ability to attract disciples from all points on the political spectrum, from Communists to Nazis, marks him as an author who might be turned to many purposes… .
One could draw a pacifist lesson from his fable of the sixty French and English soldiers who massacred each other over a trivial territorial dispute.  Carlyle’s hero-worship made him appear a proto-fascist in the eyes of many readers (including Joseph Goebbels) but it inspired [Keir] Hardie to embrace the role of Hero as Proletarian.
From Carlyle, as one agitator proclaimed, the working classes “learnt to hate shams.”  He exposed the ideological facades of the class system, preached independence of mind, and offered a vision of economic justice.
[…] some working-class women found a feminist in Carlyle.

And on and on the examples go, for nearly ten pages in this chapter alone (the entry for Carlyle in the index spans seven lines).

If Carlyle’s popularity is surprising on a political level, it’s much more surprising on a stylistic level.  I suspect the reason Carlyle has fallen into semi-obscurity has less to do with his politics and more to do with the fact that he wrote in a style which is deeply alien to us.

This is not just because he is old.  Many writers of the 19th and 18th centuries are still readable to us.  Dickens (whose Tale of Two Cities was inspired by Carlyle’s French Revolution) still entertains millions.  The Henry Fielding of 1749 sounds like the sort of wag you wouldn’t be surprised to meet in a bar in 2015.

Carlyle is different.  He wrote in a floridly romantic, extremely opaque and long-winded style which (I have heard) was popular in his day and fell out of favor soon after.  I have read the first few chapters of two of his books – Sartor Resartus and The French Revolution.  The former is convoluted but still reads well; the latter, which was vastly popular, is now practically unreadable.  The book’s style is easier to display than to describe.  Here is how Carlyle expresses the thought we might now phrase as “kingship is a social construct”:

Time was when men could (so to speak) of a given man, by nourishing and decorating him with fit appliances, to the due pitch, make themselves a King, almost as the Bees do; and what was still more to the purpose, loyally obey him when made. The man so nourished and decorated, thenceforth named royal, does verily bear rule; and is said, and even thought, to be, for example, ‘prosecuting conquests in Flanders,’ when he lets himself like luggage be carried thither: and no light luggage; covering miles of road. For he has his unblushing Chateauroux, with her band-boxes and rouge-pots, at his side; so that, at every new station, a wooden gallery must be run up between their lodgings. He has not only his Maison-Bouche, and Valetaille without end, but his very Troop of Players, with their pasteboard coulisses, thunder-barrels, their kettles, fiddles, stage-wardrobes, portable larders (and chaffering and quarrelling enough); all mounted in wagons, tumbrils, second-hand chaises,—sufficient not to conquer Flanders, but the patience of the world. With such a flood of loud jingling appurtenances does he lumber along, prosecuting his conquests in Flanders; wonderful to behold. So nevertheless it was and had been: to some solitary thinker it might seem strange; but even to him inevitable, not unnatural.
For ours is a most fictile world; and man is the most fingent plastic of creatures. A world not fixable; not fathomable! An unfathomable Somewhat, which is Not we; which we can work with, and live amidst,—and model, miraculously in our miraculous Being, and name World.—But if the very Rocks and Rivers (as Metaphysic teaches) are, in strict language, made by those outward Senses of ours, how much more, by the Inward Sense, are all Phenomena of the spiritual kind: Dignities, Authorities, Holies, Unholies! Which inward sense, moreover is not permanent like the outward ones, but forever growing and changing.

Well then!  Believe it or not, this is one of Carlyle’s more lucid moments.  More typical is the first sentence of the third chapter:

For the present, however, the grand question with the Governors of France is: Shall extreme unction, or other ghostly viaticum (to Louis, not to France), be administered?

Or this strange outburst from Chapter 1:

Yes, Maupeou, pucker those sinister brows of thine, and peer out on it with thy malign rat-eyes: it is a questionable case. Sure only that man is mortal; that with the life of one mortal snaps irrevocably the wonderfulest talisman, and all Dubarrydom rushes off, with tumult, into infinite Space; and ye, as subterranean Apparitions are wont, vanish utterly,—leaving only a smell of sulphur!

Imagine huge numbers of working-class autodidacts not only struggling to puzzle out this kind of stuff, but becoming eager fans of it.  This actually happened!  I have to wonder how to explain the gap between these people and us.  When we recoil from Carlyle, are we showing good sense, or have we lost something that these people possessed – or neither?  By 1958, a reviewer (Dwight Macdonald) could write:

The long, patient uphill struggle of the last fifty years to bring the diction and rhythms of prose closer to those of the spoken language might never have existed so far as Cozzens is concerned. He doesn’t even revert to the central tradition (Scott, Cooper, Bulwer-Lytton) but rather to the eccentric mode of the half-rebels against it (Carlyle, Meredith), who broke up the orderly platoons of gold-laced Latinisms into whimsically arranged squads, uniformed with equal artificiality but marching every which way as the author’s wayward spirit moved them. Carlyle and Meredith are even less readable today than Scott and Cooper, whose prose at least inherited from the 18th century some structural backbone.

So something happened between 1906 (when Carlyle was popular among Labour MPs) and 1958.  I wonder what it was.  (The first sentence of the Macdonald quote gives one possibility – a “long, patient uphill struggle” which, if it really happened, has now largely disappeared from memory.)

(Amusingly, Mencius Moldbug – another guy who never uses five words when five pages will suffice – loves Carlyle.  I wonder if he’s read Rose’s book?  He writes that “the basic reason Carlyle is not in your high-school English reader, whereas [Walt] Whitman is, is that Carlyle was what, here at UR, we call a reactionary,” which seems unlikely given his popularity across the political spectrum.  It seems more likely that people used to find the Carlylean style inspirational, and now it verges on intolerable.)

acatinulthar:

itsrosewho:

FAMOUS AUTHORS

  • Classic Bookshelf: This site has put classic novels online, from Charles Dickens to Charlotte Bronte.
  • The Online Books Page: The University of Pennsylvania hosts this book search and database.
  • Project Gutenberg: This famous site has over 27,000 free books online.
  • Page by Page Books: Find books by Sir Arthur Conan Doyle and H.G. Wells, as well as speeches from George W. Bush on this site.
  • Classic Book Library: Genres here include historical fiction, history, science fiction, mystery, romance and children’s literature, but they’re all classics.
  • Classic Reader: Here you can read Shakespeare, young adult fiction and more.
  • Read Print: From George Orwell to Alexandre Dumas to George Eliot to Charles Darwin, this online library is stocked with the best classics.
  • Planet eBook: Download free classic literature titles here, from Dostoevsky to D.H. Lawrence to Joseph Conrad.
  • The Spectator Project: Montclair State University’s project features full-text, online versions of The Spectator and The Tatler.
  • Bibliomania: This site has more than 2,000 classic texts, plus study guides and reference books.
  • Online Library of Literature: Find full and unabridged texts of classic literature, including the Bronte sisters, Mark Twain and more.
  • Bartleby: Bartleby has much more than just the classics, but its collection of anthologies and other important novels made it famous.
  • Fiction.us: Fiction.us has a huge selection of novels, including works by Lewis Carroll, Willa Cather, Sherwood Anderson, Flaubert, George Eliot, F. Scott Fitzgerald and others.
  • Free Classic Literature: Find British authors like Shakespeare and Sir Arthur Conan Doyle, plus other authors like Jules Verne, Mark Twain, and more.

TEXTBOOKS

MATH AND SCIENCE

CHILDREN’S BOOKS

  • byGosh: Find free illustrated children’s books and stories here.
  • Munseys: Munseys has nearly 2,000 children’s titles, plus books about religion, biographies and more.
  • International Children’s Digital Library: Find award-winning books and search by categories like age group, make believe books, true books or picture books.
  • Lookybook: Access children’s picture books here.

PHILOSOPHY AND RELIGION

PLAYS

  • ReadBookOnline.net: Here you can read plays by Chekhov, Thomas Hardy, Ben Jonson, Shakespeare, Edgar Allan Poe and others.
  • Plays: Read Pygmalion, Uncle Vanya or The Playboy of the Western World here.
  • The Complete Works of William Shakespeare: MIT has made available all of Shakespeare’s comedies, tragedies, and histories.
  • Plays Online: This site catalogs “all the plays [they] know about that are available in full text versions online for free.”
  • ProPlay: This site has children’s plays, comedies, dramas and musicals.

MODERN FICTION, FANTASY AND ROMANCE

FOREIGN LANGUAGE

HISTORY AND CULTURE

  • LibriVox: LibriVox has a good selection of historical fiction.
  • The Perseus Project: Tufts’ Perseus Digital Library features titles from Ancient Rome and Greece, published in English and original languages.
  • Access Genealogy: Find literature about Native American history, the Scotch-Irish immigration in the 19th and 20th centuries, and more.
  • Free History Books: This collection features U.S. history books, including works by Paul Jennings, Sarah Morgan Dawson, Josiah Quincy and others.
  • Most Popular History Books: Free titles include Seven Days and Seven Nights by Alexander Szegedy and Autobiography of a Female Slave by Martha G. Browne.

RARE BOOKS

  • Questia: Questia has 5,000 books available for free, including rare books and classics.

ARTS AND ENTERTAINMENT

  • Books-On-Line: This large collection includes movie scripts, newer works, cookbooks and more.
  • Chest of Books: This site has a wide range of free books, including gardening and cooking books, home improvement books, craft and hobby books, art books and more.
  • Free e-Books: Find titles related to beauty and fashion, games, health, drama and more.
  • 2020ok: Categories here include art, graphic design, performing arts, ethnic and national, careers, business and a lot more.
  • Free Art Books: Find artist books and art books in PDF format here.
  • Free Web design books: OnlineComputerBooks.com directs you to free web design books.
  • Free Music Books: Find sheet music, lyrics and books about music here.
  • Free Fashion Books: Costume and fashion books are linked to the Google Books page.

MYSTERY

  • MysteryNet: Read free short mystery stories on this site.
  • TopMystery.com: Read books by Edgar Allan Poe, Sir Arthur Conan Doyle, GK Chesterton and other mystery writers here.
  • Mystery Books: Read books by Sue Grafton and others.

POETRY

  • The Literature Network: This site features forums, a copy of The King James Bible, and over 3,000 short stories and poems.
  • Poetry: This list includes “The Raven,” “O Captain! My Captain!” and “The Ballad of Bonnie and Clyde.”
  • Poem Hunter: Find free poems, lyrics and quotations on this site.
  • Famous Poetry Online: Read limericks, love poetry, and poems by Robert Browning, Emily Dickinson, John Donne, Lord Byron and others.
  • Google Poetry: Google Books has a large selection of poetry, from The Canterbury Tales to Beowulf to Walt Whitman.
  • QuotesandPoem.com: Read poems by Maya Angelou, William Blake, Sylvia Plath and more.
  • CompleteClassics.com: Rudyard Kipling, Allen Ginsberg and Alfred Lord Tennyson are all featured here.
  • PinkPoem.com: On this site, you can download free poetry ebooks.

MISC

  • Banned Books: Here you can follow links of banned books to their full text online.
  • World eBook Library: This monstrous collection includes classics, encyclopedias, children’s books and a lot more.
  • DailyLit: DailyLit has everything from Moby Dick to the recent phenomenon, Skinny Bitch.
  • A Celebration of Women Writers: The University of Pennsylvania’s page for women writers includes Newbery winners.
  • Free Online Novels: These novels are fully online and range from romance to religious fiction to historical fiction.
  • ManyBooks.net: Download mysteries and other books for your iPhone or eBook reader here.
  • Authorama: Books here are pulled from Google Books and more. You’ll find history books, novels and more.
  • Prize-winning books online: Use this directory to connect to full-text copies of Newbery winners, Nobel Prize winners and Pulitzer winners.

Via wirehead-wannabe

(Source: iheartintelligence.com, via acatinulthar-deactivated2015092)

lambdaphagy:

su3su2u1:

lambdaphagy:

Starting over with a new post, because tumblr is broken.

su3su2u1:

I think Shalizi’s objection covers a lot of what I’d say- basically reification of factor analysis is a mistake.  It’s a description of the data, not a model.  Causal discovery is a hard problem!  

I think the response to the Shalizi piece sort of misses the point. 

Ready to make an entrance so back on up

(Cause you know we’re about to rip shit up)

So yes, factor analysis does constrain the possible models, but I think it’s actually sort of difficult in almost all cases to figure out what those constraints might be.  

Like, in the case of g, one underlying factor mostly works as a model (with a few epicycles tacked on – note that Spearman’s two-factor model has been obviously falsified.  The Jensen “g” just says ‘there is a positive manifold’, which I think isn’t so interesting a model.  You’d have guessed that from a non-psychometric idea of intelligence).

BUT, everyone seems to agree (well, Jensen, Shalizi, and that response you linked to earlier) that hundreds of independent factors, with test batteries testing dozens of randomly sampled factors, also fit the data.  (I’m not saying the hundred-factor model is right or even plausible, just that it’s as different as possible and still works.)

The difference is that Jensen and Dalliard say something like – this is just a different model of underlying g; there is still a large factor in the data! (to quote the response you linked to: “regardless of whether ‘neuro-g’ is unitary or the result of sampling, people differ on a highly important, genetically-based dimension of cognition that we may call general intelligence”)

Shalizi, on the other hand, says something like – wait, these are very different models, and it’s worrisome we can’t tell them apart with the way the analysis is being done! And the way researchers talk about g implicitly leads readers to think of the one-factor model, not the sampling model!

(Another model that fits the data well is that you have many intelligences that are screened by a common “test taking” factor)
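The claim that wildly different generating processes fit the same correlation structure is easy to check numerically. Here is a minimal sketch of a Thomson-style sampling model; the population size, number of abilities, and sampling probability are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests = 5000, 300, 12

# Independent elementary "abilities" -- no unitary g anywhere in the
# generating process.
abilities = rng.standard_normal((n_people, n_abilities))

# Each test recruits a random subset of abilities (Thomson's sampling model).
scores = np.empty((n_people, n_tests))
for j in range(n_tests):
    recruited = rng.random(n_abilities) < 0.5
    scores[:, j] = abilities[:, recruited].sum(axis=1)

R = np.corrcoef(scores, rowvar=False)
off_diag = R[~np.eye(n_tests, dtype=bool)]
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

print(off_diag.min())           # every pairwise correlation is positive
print(eigvals[0] / eigvals[1])  # and the first factor dominates anyway
```

Despite containing no general factor at all, this model produces a positive manifold and a dominant first eigenvalue, which is exactly the underdetermination point at issue.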

Not mentioned by Shalizi – but I also think the Flynn effect is a clear demonstration that the relationship between a test battery and its g-loadings drifts over time (and therefore drifts across cultures).  This screws up a lot of common analyses I’ve seen used in the literature (Jensen’s method of correlated vectors, for instance, is very sensitive to the specific g-loadings.  You can actually create spurious correlations just by dropping tests from the batteries and recalculating.  I’m actually super dubious on the idea that the method of correlated vectors makes any sense at all).

Just so we’re on the same page here before getting any further, Jensen (and Dalliard following Jensen, I guess) reject the Thomson sampling model because some pairs of seemingly disparate tests (which would seem to share few factors in common) are highly correlated, whereas other seemingly similar tests (sharing many factors) correlate less well. IIRC Raven’s matrices are a better predictor of backward digit span than forward digit span, which seems weird if all these tests are recruiting a bunch of independent, task-specific mental factors. Now, that relies on folk intuitions about cognition, which we should always take with a boulder of salt, but it’s still a bit of weight against the Thomson model. A clearer example, I think, is the correlation between Raven’s and neural conduction velocity from the retina to visual cortex, despite the latter requiring no high-level cognition whatsoever. Reaction times in general also seem difficult to square with ‘many independent but overlapping competencies’.

From an evo-devo point of view I actually have some prior sympathy for the Thomson model or something like it, and I think it’s worth trying to devise ways of discriminating between the two. Both camps claim evidence from trauma and pathology as points in their favor. I also wonder whether inbreeding depression might provide a clue, though I haven’t thought either of these issues through.

In any case, I agree that most people in most places would have agreed that there is a positive manifold, because most people in most places had a folk theory of intelligence according to which people are more or less quick or slow. It’s pretty much only within our social milieu that such things are denied. Charitably, you might imagine that this arises from a restriction of range effect. Among a university student body, you really might see negative correlations on subtests because of the simple fact that a positive correlation between rvs can go negative when you condition on the sum, which is basically what an admissions process does. Less charitably, you might suspect that the positive manifold is denied for other reasons.
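The range-restriction point can be demonstrated with a toy simulation; the subtest names and the 95% admissions cutoff below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two subtest scores sharing a common component, hence positively correlated.
common = rng.standard_normal(n)
verbal = common + rng.standard_normal(n)
quant = common + rng.standard_normal(n)

r_population = np.corrcoef(verbal, quant)[0, 1]

# Condition on the sum: admit only those whose combined score clears a high
# bar -- a crude model of a selective admissions process.
admitted = (verbal + quant) > np.quantile(verbal + quant, 0.95)
r_admitted = np.corrcoef(verbal[admitted], quant[admitted])[0, 1]

print(r_population)  # positive in the full population
print(r_admitted)    # negative among the admitted
```

The correlation flips sign within the admitted group even though nothing about the underlying traits has changed, which is why a student body can show negative subtest correlations while the population shows a positive manifold.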

I hate to keep bringing this up, but Shalizi himself presumes that it’s entirely an artifact of test construction, a point no other serious figure in this debate is willing to endorse.  So what’s obvious to both of us, from a non-psychometric point of view, was extremely non-obvious to Shalizi, so much so that he committed a fatal error in a very public way.  I am loath to psychologize people who disagree with me, but when it comes to accounting for why I seem to understand a statistical issue better than Cosma Freakin Shalizi, I do not see too many other options.

Requisite humility disclaimer that I still cannot quite believe that Shalizi went so wrong here, so I would be grateful for any alternative explanations.

I can’t tell where you’re getting some of the positions you ascribe to Shalizi — e.g. I don’t think he presumes that the positive manifold is merely an artifact of test construction.

What he says is something more like “the fact that some things we intuitively call ‘intelligence tests’ are positively correlated is an interesting discovery, but there is a risk of getting into circular reasoning by, having discovered this, redefining ‘intelligence test’ to mean ‘something that correlates well with the other members of the group’ rather than ‘something that intuitively seems like an intelligence test’.”  (E.g., we might, at least in principle, exclude what might have seemed like a perfectly good intelligence test at the outset because it doesn’t positively correlate with all the others, and thus is “not measuring general intelligence.”)  Here is the passage I’m getting this from (link to the post):

The psychologists start with some traits or phenomena, which seem somehow similar to them, to exhibit a common quality, be it “intelligence” or “neuroticism” or “authoritarianism” or what-have-you. The psychologists make up some tests where a high score seems, to intuition, to go with a high degree of the quality. They will even draw up several such tests, and show that they are all correlated, and extract a common factor from those correlations. So far, so good; or at least, so far, so non-circular. This test or battery of tests might be good for something. But now new tests are validated by showing that they are highly correlated with the common factor, and the validity of g is confirmed by pointing to how well intelligence tests correlate with one another and how much of the inter-test correlations g accounts for. […] By this point, I’d guess it’s impossible for something to become accepted as an “intelligence test” if it doesn’t correlate well with the Weschler and its kin, no matter how much intelligence, in the ordinary sense, it requires, but, as we saw with the first simulated factor analysis example, that makes it inevitable that the leading factor fits well.

You say in your first post above that it’s impossible to construct “intelligence tests” (in the intuitive sense of the phrase) that don’t turn out to have the positive correlations.  If this is true, it removes the concern about circularity, and is an interesting and non-trivial result.  (Shalizi seems to believe this is false, cf. his footnote 13.)  But in any case Shalizi is just raising circularity as a concern, not saying that the entire enterprise is circular and so the tests give no information about intelligence (cf. the first half of the quoted paragraph).

A totally unrelated and perhaps less boring point — in your first post you write

I think g is at least as real as sportiness, if that helps to clarify my position. We know that g is highly polygenic, just as sportiness is highly “polydesigned”; the true models in each case might be closer to hundreds of causal factors, each adding a small positive contribution. For mental tests even more so than sportiness, though, that model still gives you a practically one-dimensional manifold, which seems like a useful fact worth capturing. If I had to guess the true model for intelligence right now, I might say something like “many genes independently influence ‘neural processing speed’, which in turn influences the subtests.” That’s not the whole story, of course, but it’s not that crazy either in light of the correlations with reaction time. Would that make me an IQ realist?

It seems to me like you move from a weaker claim to a stronger one over the course of this paragraph.  At first, you say that g is as real as sportiness — where “sportiness” is something Shalizi used as an example of a PCA factor which we can readily interpret and understand conceptually, and which is not a causal factor in that intuitive analysis, and in fact fails confirmatory factor analysis.  (That is, CFA “works” by telling us that something we know isn’t a causal factor is, in fact, not a causal factor.)

Then you go on to present a guess about the true model, in which there is one causal factor (“neural processing speed”) influencing all the subtests.  This is the kind of thing that, if it is true, should at the very least pass CFA tests for single-factorness.  That is, something like this has to be “more real than sportiness,” in that it would produce, in addition to the positive manifold, other correlation structures that we could look for.  When you postulate a causal graph, you postulate something with testable consequences.  Shalizi’s point with the car example is that a causal graph with “size” and “sportiness” pointing toward every other variable makes predictions that are falsified by the data.  Your “neural processing speed” guess would also make predictions, and assuming these would be confirmed (i.e. that your causal graph is actually true, or at least not demonstrably false), the general factor would be more real than sportiness.

(ETA: You say “that’s not the whole story, of course,” so I suppose you could say: “well I’m not claiming there’s really only one factor, just making a simplified model in which there is only one factor.”  But at this point, in what sense is your model valuable?  If you’re not claiming that there really is only a single factor, then you’re simply redescribing the positive manifold in misleading causal terms.  It’s like saying “in my simplified model, size and sportiness cause all properties of cars” – I guess one can say this if one wants, but it’s clearly not a good description of what is going on [we know that sportiness does not in fact cause anything], and I’m not clear on the value of making such a statement.)

This point is distinct from the fact that IQ is correlated with many life outcomes and can be used, with some success, to predict those outcomes.  (No serious person in this debate denies that, and Shalizi explicitly affirms it at the end of his post.)  The point is that, if you take a set of PCA factors and make up a causal graph based on them, that causal graph then makes extra predictions which you have to go back and test.  (This was how Spearman’s original model, with only one factor, was falsified — and by the same token any purely one-factor model, e.g. “neural processing speed causes everything,” won’t work.)
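The "extra predictions" point can be made concrete with Spearman's classic tetrad constraint: under any single-common-factor model, every quartet of tests must satisfy r₀₁·r₂₃ ≈ r₀₂·r₁₃, whereas a two-factor world can show a positive manifold while violating it. A toy simulation (my own illustration with invented loadings, not from either post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def tetrad(R):
    # Spearman's tetrad difference for four variables: vanishes
    # (up to sampling noise) under any single-common-factor model.
    return R[0, 1] * R[2, 3] - R[0, 2] * R[1, 3]

# World A: one common cause drives all four tests (different loadings).
speed = rng.normal(size=n)
one = np.stack([l * speed + rng.normal(size=n) for l in (0.9, 0.8, 0.7, 0.6)])

# World B: two distinct causes, each loading on all four tests, so every
# pairwise correlation is still positive (a "positive manifold").
f1, f2 = rng.normal(size=n), rng.normal(size=n)
two = np.stack([
    0.9 * f1 + 0.1 * f2 + rng.normal(size=n),
    0.8 * f1 + 0.2 * f2 + rng.normal(size=n),
    0.1 * f1 + 0.9 * f2 + rng.normal(size=n),
    0.2 * f1 + 0.8 * f2 + rng.normal(size=n),
])

t_one = tetrad(np.corrcoef(one))   # near zero
t_two = tetrad(np.corrcoef(two))   # clearly nonzero
```

Both worlds produce a positive manifold, but only the one-cause world passes the tetrad test — which is the sense in which a causal guess like "neural processing speed" sticks its neck out further than the manifold alone.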

(via lambdaphagy)

chroniclesofrettek asked: Do you think adding up bad things among people is possible, in principle?

slatestarscratchpad:

nostalgebraist:

Not necessarily “adding up,” per se, but some sort of aggregation, yes.  N+1 people suffering the same pain is worse than N people.

I think what is often missing from this kind of conversation is an explicit accounting for preferences.  Maybe because of the VNM axioms people think they can switch back and forth between utility and preferences and it’s all the same, but that only applies to one person.  For aggregation, preferences seem important.

E.g. in the dust speck problem my argument was that every one of the 3^^^3 people would choose dust speck over torture, and this shouldn’t aggregate to torture over dust speck.  Of course, for each person one could translate these prefs into utilities via VNM, so then adding up utilities would also give you the right answer.  But that’s assuming that each person knows what’s going on; if they thought the choice was “dust speck vs. no dust speck” they’d choose the latter (and the utilities would add up to torture).

So here is where it gets tricky: I think the preferences have to be considered even if the people don’t know what is going on: that is, the fact that they would object to the full situation if presented with it means that their prefs shouldn’t aggregate to choosing torture.  It’s hard to know how to frame this in consequentialist terms, because it involves counterfactuals; perhaps the only real consequences these people feel are dust specks or their absence.  But I think the fact that each person would individually choose dust speck means that dust speck must be chosen in aggregate, though I don’t know how to formalize it.

I’m having trouble interpreting what you mean by saying that “each person would individually choose dust speck”.

If you mean each person would choose a 100% chance of dust speck for themselves over a 100% chance of torture for themselves, then that doesn’t seem any more significant than that every member of the lottery-eligible population would choose “buying a ticket and winning” over “not buying a ticket and not winning”. You can’t aggregate that to prove that buying a ticket is correct.

If you mean each of the 3^^^3 people in the actual scenario would vote for the dust speck branch, that’s trivially wrong - Eliezer is a person and he would vote torture.

If you mean each of the 3^^^3 people in the actual scenario would vote for the dust speck branch if they were being completely selfish rather than trying to do complex moral reasoning, I don’t think that’s right either. Many would (correctly) note that 1/3^^^3 chance of torture is so low it might as well be zero, so faced with a choice between “torture someone else” and “dust speck to me” the purely selfish person would choose the first.

Am I misunderstanding you?

I meant the second option.  I’m being less explicit than I thought I needed to be, but apparently that has meant not being comprehensible to anyone.  A bit more explicitly:

When I talk about “everyone” voting for the dust specks, that is not a confident assertion that literally everyone would in a large population.  I am referring to two things at once:

a) the idea that most people, if they hadn’t heard some philosophical argument about this before, would vote for the dust speck branch because it seems like a no-brainer (cf. non-LW people’s responses when I’ve brought up this thought experiment — everyone’s like “Christ just do the dust specks, why on earth would you think otherwise”) (I’ll come back to why this idea matters)

b) the fact that it is possible to construct a version of the scenario in which everyone literally agrees.  For instance, if they were all copies of me.  The original statement of the problem under-specifies this: it just talks about “people.”  The kind of analysis people tend to apply to get the torture choice seems like it would apply equally well to this hypothetical case.  Yet in this hypothetical case, every one of the people individually wants dust speck.  This seems bizarre.

If the answer to this conundrum is “well really we need to look at who the people are and what they believe before concluding,” then that (partially) proves my point — you can’t look at the original question, which simply talks about “people,” and immediately conclude “torture.”  The people could all be copies of me; you weren’t told they aren’t.  (Maybe there’s an implicature that they’re somehow a “normal population” but what is a “normal population” of 3^^^3 people, is it like zillions of earths, or just zillions of Americas, or zillions of unimaginable post-singularity beings, or what — this matters)

The other option here is biting the bullet and saying “even if the people are all copies of rob nostalgebraist, torture is still the right option.”  lambdaphagy makes an argument in favor of this: if you change the problem to asking 3^^^3 people “should you get a speck with probability 1 or be tortured with probability 1/(3^^^3),” most people (including me) would tend to choose the latter, but if sufficiently many people are asked this question, someone’s going to get tortured.

However, it’s not at all obvious to me that this is the same problem.  This problem is about personal risk, like the risks we all take in driving cars (creating a world in which car crashes occur).  But the probabilistic choice for an individual simply isn’t the same as the non-probabilistic choice.  I drive a car from time to time, but if you tell me: “either you have to walk to where you’re going this one time, or someone else will get in a wreck with probability 1,” I’d choose the former.  One can try to inter-convert the two by iterating: a lifetime of choosing driving over walking might add up to someone getting in a wreck with high probability.

But I just don’t see that these things are obviously the same.  Crucially, the iterated case asks if adding up enough inconveniences for one person can get you something as bad as a wreck.  This is the “problem of comparing Small Bads to Big Bads” and it’s a problem worth talking about, but it’s all in one person’s life.  Adding up Small Bads over many people and getting a Big Bad is different because each individual (if they’re me) will prefer a Small Bad for them to a Big Bad for someone else, and also knows that all the other copies of them will think the same.  There is no intuitive pressure to add up the Small Bads across people to get a Big Bad, and in fact the intuitive pressure is the opposite, since that would violate the preferences of every single copy of me.

(A note in passing: this kind of thing does get almost circular, insofar as making ethical decisions involves looking at what people believe about ethical decisions; making the choice involves trying to decide what everyone else thinks about the choice.  Some amount of “other people’s minds are like mine” has to be done here, but that’s life — we do it all the time.)

Finally, to come back to why my point a) above matters.  My earlier posts have gotten responses like “well you don’t know what all those people believe, maybe they have different philosophies,” which is true.  But the reason this thought experiment is so inflammatory and (supposedly) illustrative is that we have a strong intuition that most people would choose speck if asked; the idea that torture is the “correct” option is counter-intuitive because it’d take a very unusual individual to prefer it.  So aggregation can result in “correct” answers that very few people, or even no people, would individually want: I take that to be sort of the point of the thought experiment (“I bite the bullet on this seemingly absurd conclusion”).  If you start bringing up the idea that maybe a lot of the people wouldn’t individually choose speck, you’ve defanged the problem: you’re now imagining a world where people don’t generally find the problem counter-intuitive, when the whole point of the problem was getting a counter-intuitive result.  You can imagine such a world if you want, but it seems to run counter to the spirit of the problem, and simply is not all that interesting.  Either the result is counter-intuitive or the world is filled with bizarre non-speckers.  I’m not sure why the latter choice is worth thinking about, even to the extent that any of this is.

(N.B. block “speckgate 2015” if you don’t want to see any more posts like this, which would be entirely understandable)

slatestarscratchpad:

nostalgebraist:

[snip]

This seems kind of like cheating to me.

Consider a bunch of statements like “electromagnetism is stronger than gravity” where, if it were otherwise, it would be impossible to have an orderly universe that supports life.

It seems intuitively obvious (I KNOW, I’M SORRY) that this could be the opposite - gravity could be stronger than electromagnetism. Or you could even say something like “The value of gravity is x, but it seems intuitively possible for it to be one-half x”. You could set up this space of “Here are a thousand dimensions it intuitively seems like the universe could vary on; once I look at the universe varying along those dimensions I notice that only a very small percent of the results allow life.”

I feel like you could do two things with these intuitions. First, you could accept them as accurate, in which case we probably have enough of these degrees of freedom that the fine-tuning argument stands

Second, you could reject the intuitions. You could say “Although it certainly seems like gravity could be one-half x, this is an illusion. Due to deeper-level math that we don’t understand yet, it is logically necessary that gravity be x” or “It is logically necessary that electromagnetism be stronger than gravity”.

But if you do that, you’re kind of recursively passing the buck. Okay, so some mathematical superlaw says that the laws of gravity have to be what they are. But then you have a fine-tuning argument from the superlaws.

Even if the superlaws are mathematically necessary, it still seems like there’s this extra fact to be explained - that the mathematically necessary thing is also the thing that uniquely allows life to exist.

If this is true, then it seems either that mathematics is somehow teleologically aiming at the existence of life - that is, there’s a necessary correlation between the laws of mathematics and the things that make life possible - or you end up with another weird fine-tuning coincidence.

I guess my opinion is that the buck has to stop somewhere, or else you get into the “God created the universe, but what created God?” type of problem.

Say one believes that fine-tuning implies that there must have been some higher-order mechanism producing a whole bunch of universes, most of which have no life.  Okay, that mechanism itself has dynamics, and presumably there are ways it could have been set up so that it would produce zero universes with life.  So do we invent an even-higher-order mechanism?  When does this process stop?

I guess if “fine-tuning” is the signal for the process to continue, then we stop once we get something that is not fine-tuned.  But once we are thinking about these very speculative things, how can we determine that?  "The evolution dynamics for the universe-evolving dynamics could not have been anything but what it was" is hard to declare when we have no a priori idea of what an “evolution dynamics for universe-evolving dynamics” should/could look like.

It might clarify things to say that part of why I don’t like this argument is that it’s always applied to the parameters of physical laws, but never to the mathematical form of the laws themselves.  It’s like people are implicitly assuming that someone/something set up the equations of physics first, with a bunch of “slots” for undetermined numbers, and then determined the numbers later, like rolling a D&D character (with the D&D rules already established).  But are the laws themselves inherently less contingent than the parameters?  If we “could have” a universe with a different gravitational force strength, couldn’t we just as equally have had a universe with, like, five different flavors of gravity, or one where the equation we know for gravity has a bunch of extra complicated terms, or, well, anything?

The only reason I can see for considering variations in parameters, but not variations in laws, is that there’s an intuitive metric (“sense of scale”) in the former case but not the latter.  We can easily say “ah, suppose that parameter were 5% larger … ” but if we start to imagine adding some weird extra term to the Einstein field equations, that feels less like something that “could be true.”  In particular, we have no obvious way to quantify “how different” that equation would be from the real one – what does it mean for an equation to “change by 5%”?

But I think this perception of difference is in fact an illusion that comes from working with things like machines or games in which there is a relatively fixed setup that includes some adjustable “knobs.”  Physical laws aren’t split up this way: they just are what they are, and they don’t come identified with bits labelled “knobs you can twiddle.”  We think of the parameters (like the relative strength of forces) as “knobs” only because it’s relatively easy to imagine them being different.

To get back to the main point – I think it only makes sense to say “gravity could have been stronger” if you’re also willing to admit possibilities like “gravity could have obeyed an equation with 10^500 non-negligible terms” or “there could have been 19 distinct forces all of which are described by mathematical structures we haven’t invented yet.”  The “intuitively obvious” universes where (say) gravity was stronger but everything else was the same are just a tiny little neighborhood in a vast, infinite-dimensional space of possibilities.  And it’s impossible to say what fraction of this space can support life (it’s not even clear how to define what we’d mean by a “fraction” of it!).  At this point the fine-tuning argument deteriorates from “we are very special among all alternatives” to “we are special in the context of our minuscule neighborhood.”

ETA: that was rambling.  A summary of the core point might be: “if you really take into account all of the dimensions along which things could conceivably vary, it actually breaks the fine-tuning argument rather than strengthening it, because the resulting space is way too big for us to draw many conclusions about it.”

(via slatestarscratchpad)

gentlemantiger:

nostalgebraist:

gentlemantiger:

nostalgebraist:

People are talking about Max Tegmark’s Level IV Multiverse on the dash, and all I can think of when I hear about that is Scott Aaronson’s quip:

As Tegmark acknowledges, the central difficulty with doing so is that no one has any idea what measure to use over the space of mathematical…

I’m a super layperson on this, but my understanding of the life parameters was that literally, if the universe didn’t have molecular bonds X amount strong, or there was this much push in another direction from physics, physical matter wouldn’t be capable of taking the shapes and modes needed for life as we know it on earth.

‘Fine tuned’ as a descriptor drives me crazy, since it implies an intelligence or inevitability to things, instead of just being a lucky break. If you have an infinite number of universes, that means some universes got 'lucky’ and have life in them.


If I’m just chirping stuff at you already know, I’m sorry. My own personal biases (as a layperson) lean towards there being multiple universes, but that’s as much that I’m a fan of the idea in fiction as anything else. And like I said, I’m mostly a layperson with some interest in this from an outside perspective.

That’s all true — what I’m saying is that because these are facts that can vary continuously, we need a sense of scale in order to say they’re “fine-tuned” rather than “coarse-tuned.”

E.g. Wikipedia’s page for “Fine-tuned universe” says that “… the strong force must be tuned to a precision of at least 0.5%, and the electromagnetic force to a precision of at least 4%, to prevent either carbon production or oxygen production from dropping significantly.”  Now, I guess 4% sounds “small,” and 0.5% sounds even smaller.

But in some physical cases these deviations are not “small” at all.  Engineering tolerances sometimes need to be smaller than this (the page here describes a case where one needs precision of better than ±0.5%), and the manufacturing process can be set up to guarantee this.  Or, if we want to get really extreme: in equilibrium statistical mechanics, fluctuations scale like the square root of the number of particles, so if we have 10^23 particles (roughly a mole), the “tuning precision” in the sense above will be sqrt(10^23) / (10^23), which is on the order of 10^(-10)%.  In that context, a fluctuation of 0.5% would be staggeringly huge, not small.
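For what it's worth, here is the arithmetic behind that yardstick (just a sanity check of the figures above, nothing more):

```python
import math

# Relative fluctuation for N weakly interacting particles scales as
# sqrt(N) / N = 1 / sqrt(N).
N = 1e23                          # roughly a mole
relative = math.sqrt(N) / N       # about 3.2e-12
as_percent = 100 * relative       # about 3.2e-10 percent

# A 0.5% deviation, measured against that scale:
ratio = 0.5 / as_percent          # on the order of a billion
```

So a 0.5% fluctuation in such a system would be over a billion times the typical one — "staggeringly huge" is not an exaggeration there.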

So, in order to decide whether a figure like “it couldn’t vary more than 0.5%” counts as “fine tuning,” we need some sense of what a “typical” or “relevant” deviation is in the physical process at hand.  In the case of the universe, that would have to be some process that produces universes with different parameters.  If we know there’s such a process, we already know there’s a multiverse; if we don’t know of such a process, we can’t establish the relevant scale, and hence we can’t declare the tolerances “small” and conclude there’s a multiverse on that basis.

Ok, that makes a lot more sense, thank you!

I’m curious here though, is there say, a meaningful difference in the idea that more than one universe could exist, and that one is 'created’ whenever an event occurs and is therefore necessary for quantum physics?

Because the latter does seem pretty BS to me, but it doesn’t seem that implausible to think that if one universe came into existence, that the process that did that might do that a bunch more times.

Disclaimer: I don’t know what I’m talking about, yadda yadda, layperson.

There are several different types of multiverse ideas out there, and they’re pretty much totally unrelated, though they’re sometimes grouped together because they’re all about multiverses.  (Tegmark refers to these as different “levels” – I think the quantum one is his “Level III”)

The kinds of ideas we’re talking about are ones in which there are multiple beginnings to a universe, like multiple big bangs, resulting in different physical parameters.  This is the kind of thing you can talk yourself into if the fine tuning arguments make sense to you (though that isn’t the only argument for them).

The quantum thing is totally different – it’s one way (the “Many Worlds Interpretation” or MWI) of interpreting the fact that in quantum mechanics there seem to be objects out there that are in “combinations” of different states, and when we observe them, we end up seeing a single one of those states, chosen at random.  The MWI take on this is (roughly) that there are multiple copies of us and multiple copies of the object, and when we look at the object, we get “linked to it” so that each copy of us sees only one copy of it.  Thinking of this as “universes getting created” is sort of misleading, I think – it’s more that objects (including us) can exist “in parallel” and when we observe something our parallel selves get one-to-one linked to its parallel selves.

There are problems in placing MWI on a physically rigorous footing, but from what I’ve seen this is also true of any other attempt to make sense of quantum mechanics, besides the neutral position of “just crunch numbers and don’t worry about it.”  So if you’re wondering what to think of MWI as a layperson, I’d say the right attitude is something like “it was a very good idea that unfortunately we haven’t gotten to work properly, at least not yet, but the alternatives don’t work much better, and the whole thing is still a mystery.”

In any case, the MWI “multiverse” is totally distinct from the “multiple big bangs” multiverse.

(Disclaimer: I have a physics degree, but I’ve never focused on this stuff and have actually read much less about it than a lot of interested laypeople have)

(via 351399021-deactivated20180818)

queenshulamit-deactivated201602 asked: What is your response to people who say MIRI diverts people's attention from the more immediately pressing concern of climate change?

slatestarscratchpad:

nostalgebraist:

slatestarscratchpad:

I wonder if these same people ever worry that, let’s say, poverty relief or feminist activism diverts people’s attention from the more immediately pressing concern of climate change. If so, they get +2 consistency points - but then I wouldn’t expect them to talk about MIRI, since in terms of total number of dollars / hours of effort put in it’s about 0.01% of the other two.

(I wonder if saying “MIRI diverts people’s attention from the more immediately pressing concern of climate change” diverts people’s attention from the more immediately pressing concern of poverty relief diverting attention from the more immediately pressing concern of climate change.)

But I actually think the situation is even better than that. I think that something like feminist activism funges strongly against climate change, since it’s using the time of political activists who are good at raising awareness in the general public and getting political stuff done.

Something like MIRI funges very weakly against climate change, because it’s getting meta-mathematics geeks to write proofs and maybe a few people to donate money. At this point I don’t think the climate change movement really needs either of those things. It’s so well-funded that MIRI’s million or so would be a tiny drop in the barrel, and although it’s possible that meta-mathematics geeks could, with some retraining, become climate simulation geeks, I don’t really think the lack of sufficiently good climate simulations is what’s holding global action against climate change back.

In other words, this seems a lot to me like motivated reasoning - “MIRI is weird, therefore MIRI is bad, therefore let me find some reason MIRI is bad, even if I would never consistently apply that reasoning to anything else.”

I have made this criticism before, so I want to explain myself a bit.

You’re right about the fact that better climate models won’t in themselves spur global action on climate change.

However, I think it’s important to be aware of the fact that making better climate models is very much a problem that requires math/physics type talent (not really in overlapping areas with what MIRI does, but the kind of thing I’m sure MIRI’s people could work on competently), and that in some ways this is an understaffed problem.  (Don’t particularly want to get into an argument about that claim right now — just stating that it is what I believe, and it is what some people in the field believe)

The margin at which the critique makes sense is not “who should I give money to?” but “it seems that I am good at math; what should I do with this ability?”  MIRI and its supporters argue that MIRI is not just a fun job to take, but an important one, because it’s about an important understaffed problem.  But even if that’s true (arguable, but that’s a whole distinct argument), making better climate models is also an important understaffed problem.

(I know that sounds weird, because climate change itself is very well known.  But it’s also extremely difficult theoretically, and not the kind of thing whose theoretical side is well-popularized in the way that, say, the theoretical side of fundamental physics is well-popularized.  Young physics students dream of accelerators, not cloud parameterizations.)

Finally, I think the point about “better climate models won’t in themselves spur global action on climate change” applies to some extent to MIRI as well — building the theory in itself does nothing to ensure that people will use it.  But I would argue that improving climate simulations is a very important task even if the world isn’t taking action on them — because even if we just sit back and passively react to climate change rather than trying to stop it, it will be very useful to be able to predict it.  Even if, say, coordination problems prevent everyone from getting their shit together enough to stop a certain area from becoming unlivable, it would be very helpful — for governments and for people — to know with high certainty, and as far in advance as possible, that that area will become unlivable.  

By contrast, I don’t think Friendly AI theory has the same kind of applicability in a world in which it is not used preventatively.  (In fact, I think in Yudkowsky’s view it explicitly wouldn’t be: an Unfriendly seed AI, once built, would quickly grow beyond our comprehension or control, so no theory will be able to give us predictions of the “yeah we ended up in the Bad Timeline but here’s exactly how the badness will unfold, use this to protect your loved ones” type.)

Thanks for the explanation. What you say about trying to predict global warming, even if you can’t stop it, seems right, and makes much more sense than what I thought queenshulamit’s hypothetical person was saying.

But I still disagree with you. How much high-level math talent do you think graduates a year? 20,000 people? 30,000? The NSA gobbles up a few thousand for cryptography, Jane Street gobbles up a few thousand for investment banking, Google gobbles up a few thousand for software engineering, academia gobbles up a few thousand for pure research. In what world is it worth worrying about MIRI getting, like, one person per year for a cause that’s probably better than any of those guys?

The strongest counterargument I could think of is that only a few of those thousands of mathematicians are interested in doing good, and those are the ones passing up jobs with Google to look at things like global warming or AI risk.

But I don’t really think that’s how it works. Except for a couple of very special people on GiveWell, I don’t think people are inspired to do good and then look for the most efficient charitable way to pursue that goal. I think they get interested in a specific cause, and then their attachment to that cause inspires them to separate from the packs going to Google and the NSA and accept a lower salary pursuing their dream.

Eliezer’s already said that if he didn’t believe in AI risk, he’d be a physicist. A lot of the people who come to MIRI come to it from Google, and a lot of the people who leave MIRI leave it for Google. Nate’s bio ( http://lesswrong.com/lw/jl3/on_saving_the_world/ ) doesn’t really leave a lot of room for expecting him to become a climatologist. On the other hand, how many climate modellers do you know who seriously considered whether they should work in the field of AI risk instead?

I think that, with the possible exception of you, it’s very likely that there is not a single mathematician in the world who has ever seriously considered, even for the space of a single thought, whether to work for AI risk or climate modelling. There are a lot of mathematicians who have considered whether to work for Google or climate modelling, and a lot who have considered whether to work for Google or MIRI, but just by statistics alone - let alone the different personalities both causes attract - we wouldn’t necessarily expect them to overlap.

Three quick points here — then I’ll have to bow out for the rest of the day because I need to get myself off tumblr and work.  (Usual work void rules apply: yell at me if you see me tumbling)

Point 1: The size/influence of MIRI is just not relevant at the margin I’m considering.  The question I’m asking is “what should one person who’s good at math choose for their career?”, and that one person will be spending 100% of their work time working for MIRI if they choose to work for MIRI.  At this margin it is, yes, worth worrying about whether MIRI is the right choice or not.  If you make a suboptimal choice, the utilitarian analysis doesn’t care whether your particular suboptimal choice is taken rarely or often.

Point 2: I think you are underestimating the transferability of skill between the relevant domains here.  When EY says he “would have been a physicist” he is saying what many thousands of people like me say before getting degrees in physics and then going on to do something else.  “Physics” in the narrow sense is vastly overstaffed, and the skills you learn when you get a physics degree largely overlap with many other, less overstaffed domains.  (I briefly considered going to engineering grad school, and was astonished when the head of my undergrad physics department told me that my lack of an engineering degree would help rather than hurt me in admissions, because a physics degree is treated roughly like a better version of an engineering degree.)  Many, many people who once “wanted to be physicists” now work in engineering, software, data science, etc. (su3su2u1 can probably talk more informatively about this.)

I realize this is kind of ironic given the stance I was taking in the IQ debate, but the longer I’ve been in math-related fields, the more I realize “being good at math” really is a transferable and widely applicable skill.  I learned all of the climate-related stuff I know in grad school because few people teach it to undergrads; many of the professors I work with on climate stuff have physics degrees, in some cases even physics Ph.D.s.  Paul Christiano has a BA in math from MIT and is now a Ph.D. student in computer science and working at MIRI; I’m sure he could learn climate fundamentals in a few years just like I did (and would likely be better at it than I am).

Point 3: I think your response mixes together facts and values in a confusing way.  Sure, many people don’t actually ask themselves “should I do AI risk or climate?”, but maybe they should.  At this point I admit we are getting away from the original question a bit, since we’re no longer talking about what MIRI actually does.  My point is that altruistically motivated people with math skills should think about stuff like climate as an alternative to working for MIRI, and (see point #2) they probably could do either.

That they aren’t aware of both alternatives is a fact about social networks and about the way neither of these projects is especially well publicized.  (To the extent that they’re publicized, it isn’t in the same social networks; among other things I think there’s a literal west coast / east coast split here.)  It doesn’t really bear on the ethical point about which of these is a better thing to do.

(It feels a bit like someone saying that MIRI is worthless because, after all, “very few skilled people ever think about working for MIRI.”  Surely true, but that’s merely a fact about MIRI’s lack of publicity.  The question is, should people think about working there?)

chroniclesofrettek:

nostalgebraist:

chroniclesofrettek reblogged your post Quick note on the su3su2u1 / slatestar… and added:

Yep it’s possible to reason badly by taking your prior such that you get the result you want. 

Can you give an example of a place where you don’t have a prior? Someplace where you wouldn’t get involved in a futures market on the question, no matter what the price was? 

I think fundamental physics is the classic example here: it is very easy to say that we are more confident in the latest theories than in older ones, but impossible to say “just how confident” we are in the latest theories.

If I lived, say, shortly before the Michelson-Morley experiment, and you had asked me “how probable do you think it is that the Newtonian account of space, time, and kinematics is accurate (for all possible values of length, time, and speed)?”, I would have had no idea how to answer.  Because I would have no reason whatsoever to believe the answer might be “no.”  And yet I would also know that not every possible experiment had been performed, that tomorrow we might find out that Newtonian kinematics is just a local approximation to a better kinematics (as in fact we did).

So what probability should I assign to that possibility?  What’s the probability that “when we explore regions of physical parameter space we haven’t yet probed, we’ll have to revise theories that have been very successful so far?”  0.01%?  1%?  25%?  33%?  50%?  Even posing the question this way seems bizarre: my feelings about such an issue are not naturally described by probabilities.  And I wouldn’t enter a prediction market on the question because to expect to do well in a prediction market you have to have some confidence in your probability assignments, and in this case I would have none.

So you would reject entering a prediction market for that at 99%, 1%, 0.000001%, 0.0000000000000000001%? If getting it wrong meant you lost a dollar and getting it right netted you $10M?

Well, okay: we could play this revealed preference game and elicit probabilities from me for anything.  If that proves to you that I “have a prior,” then I always have a prior, yes.  Some of my responses might not obey the probability axioms, but you could chalk that up to me being an irrational human and instruct me to become more Bayesian.

However, if we go back to the hypothetical me who lives before the Michelson-Morley experiment, I think you’d find an odd pattern in the answers you’d get.  If H = “Newtonian mechanics works at all scales,” and you try to elicit P(A|H) for various A, I think you’d get pretty nice, coherent results.  Because I could actually (in some cases) write down the physics equations relevant to A and use them and some probability theory to compute P(A|H).  My answers would look like a real probability distribution – no conjunction fallacies or the like – because they’d been derived using actual math.
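The contrast can be made concrete with a toy sketch (all numbers here are hypothetical, chosen just for illustration): given a concrete theory H, you can derive a prediction from the physics and push measurement error through it with ordinary probability theory, which is exactly why the P(A|H) answers come out coherent.

```python
import math

# Hypothetical worked example of computing P(A | H) when H is a concrete theory.
# H = Newtonian free fall (g = 9.8 m/s^2), drop height 20 m,
# with timing measurement error modeled as Gaussian (sigma = 0.05 s).
# A = "the measured fall time lands between 1.9 s and 2.1 s".
g, h, sigma = 9.8, 20.0, 0.05
t_pred = math.sqrt(2 * h / g)  # Newtonian prediction, about 2.02 s

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Probability mass of the Gaussian measurement distribution inside [1.9, 2.1]
p_a_given_h = phi((2.1 - t_pred) / sigma) - phi((1.9 - t_pred) / sigma)
print(p_a_given_h)
```

Nothing here requires guessing: the theory supplies the prediction, the error model supplies the distribution, and the answers to different questions automatically obey the probability axioms because they all come from the same distribution.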

But what if you try to elicit P(A|~H)?  You’d get a mess.  Because I would really have no idea what “the world under ~H” looks like.  Trying to actually confront that question would be a mammoth task: I’d have to invent every one of the infinite (?) number of possible theories that limit to Newtonian kinematics at human scales, and find some way to weigh the likelihoods of these against one another, and work out what they predict about A, which in some cases might be mathematically intractable.  I wouldn’t just have to be Einstein and invent special relativity, I’d also have to invent doubly special relativity, and all sorts of possible but not-yet-invented-in-2014 variations on SR.  And consider theories that are like SR with various extra terms that aren’t relevant on human scales, and find a way to weigh those against each other and put a probability distribution on the coefficients in front of those extra terms (which can’t be uniform because they couldn’t be too big or they’d start being relevant at human scales again).  And could I use something like Occam’s Razor to start neglecting theories once they have sufficiently many annoying extra terms?  Nope, because if I ever wanted to do a Bayesian update based on this prior, that could totally screw up the posterior probabilities, even of the theories I do find likely (see this cool paper).  So on I go, trying to devise a system I can use to get predictions out of an infinity of successively more complicated theories (the majority of them intractable by known mathematical means), hoping all the while my answer won’t be catastrophically sensitive to the choice of the assumptions I’m putting on my priors for the ever-multiplying parameters in these theories … so that I can finally, having become the All-Physicist and invented all the kinematics that could ever be devised, give you a consistent, math-based estimate on the betting odds I’d give … 

Would anyone ever do this?  Of course not!  What anyone actually thinks when you ask for P(A|~H) is “oh god, who even knows.”  When pressed, they then submit their default betting odds in case of “oh god, who even knows” questions.  (I don’t quite know what mine are, but I’d probably take the $1 vs $10M, at least.)  The results of this simple procedure, unfortunately, do not behave like a probability distribution.

What’s P(B|~H), where B and A are independent?  It’s P(“Oh god who even knows”).  What’s P((A and B)|~H)?  Well, it should be P(“Oh god who even knows”)^2 (independent events and all), but if you’d asked me about it before you’d asked me about A and B, I’d just give you P(“Oh god who even knows”).  I will never give you anything except P(“Oh god who even knows”), because what I have in my head is not a probability distribution, not even an approximation to one.  It’s total uncertainty.
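The incoherence is easy to exhibit in a toy script (the default odds below are a hypothetical stand-in): if every question about ~H elicits the same default probability p, then for independent A and B the product rule demands P(A and B|~H) = p², but elicitation returns p, so the answers violate the probability axioms for any p strictly between 0 and 1.

```python
# Toy illustration of the "oh god, who even knows" elicitation pattern:
# every question about the world under ~H gets the same default answer.
DEFAULT = 0.5  # hypothetical default betting odds

def elicit(question):
    """Whatever you ask about ~H, you get the default back."""
    return DEFAULT

p_a = elicit("A | ~H")
p_b = elicit("B | ~H")
p_ab = elicit("A and B | ~H")  # asked directly, still the default

# For independent A and B, the product rule requires P(A and B) = P(A) * P(B).
coherent = abs(p_ab - p_a * p_b) < 1e-12
print(coherent)  # False for any default other than 0 or 1
```

The elicited numbers look like probabilities individually, but taken together they are not a probability distribution over anything: they are one number repeated, which is just “I have no idea” wearing a decimal point.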

The thing you are eliciting from me when you ask me these betting questions about ~H is a pile of probability-axiom-disobeying nonsense, adding up to nothing more than “I have no idea.”  And trying to update on that is not a foundation I want to base science on.

(via chroniclesofrettek)