jadedviol:

nostalgebraist:

napoleonchingon:

Really surprised by the lack of pushback to the @parvumlyceum essays that @nostalgebraist highlighted. Like, I would admit that they are a useful corrective to ideas that I (a person far outside of any contact with university English departments) stereotypically assume to be prevalent currently in English departments. But it seems to me clearly incorrect as an absolutist position on its own. Like, just to use the very first example from @parvumlyceum: does anyone actually think cheese-making skills are more helpful to understanding the cyclops chapter of the Odyssey than knowing about how the Greeks conceptualized the relationship between guests and hosts? Because I sure don’t.

I’d definitely be interested in reading pushback.  I got a lot of emotional satisfaction out of the strident, righteously-angry-prophet tone in conjunction with a type of critique I think is far too rarely made – to me it feels natural for those two to go together, for the people who smash idols to go about it with fury and perhaps not with the delicacy I usually value, if that makes sense.  But that writer does seem to have a positive vision of his own that I don’t know if I agree with, and I’m not sure I’d be happier in his preferred vision of the university, with its own idols.

(For one thing, his prescribed fix for “these disciplines don’t have a good enough sense of their own history” seems to be, implicitly, “just learn the history of everything ever and you’ll be good.”  He talks about academic specialization as a way of establishing which things you are allowed to be ignorant of, and admits that specialization has its uses and upsides – but the other essays are all about chastising certain fields for not having read the things they’ve decided are OK not to read.  I think he’s right that the “epistemic closure” in some fields may have become too tight, but then again time is finite and one does have to make decisions about what to read, and I don’t know if he has a real answer to that dilemma)

I’m interested to hear more about your skepticism toward literary studies on historicist grounds. Partly since I’m unsure how to take the posts you reblogged. I take their main beef not to be with “literary studies” as a whole but with what they see as the corrupting monopoly the university system maintains on literary studies and intellectual respectability, and with the more abstract branches of literary criticism (like semiotics and thematic analysis, I guess?) that they believe to have dominated English literature departments.

My problem is that the sorts of things they call for are basically what I found in my English Lit classes. Like, sure we talked about the question of authority in the Wife of Bath’s tale, but we also talked about Middle English economics and the rise of tradespeople and the trend of using English as an artistic device (or just as a thing people wrote at all). We talked about the politics of the time and the Crusades and indulgences and the beginning hints of the Reformation. We talked about the Great Vowel Shift and very, very basic philology. We talked about what inns were like and what going on a pilgrimage was like and why people went on them. We talked about how when he describes the cook:
   That on his shin an open sore had he;
   For sweet blanc-mange, he made it with the best.
that a blancmange is this gelatinous white stuff that can look like the pus in an open wound.  And that’s all just the week or two spent on Chaucer.

I’m not sure what parvumlyceum would have wanted my prof to do. We only had maybe a couple weeks on Chaucer in a literary overview course meant to be taken by first-semester freshmen at a mid-tier school. I suppose we could have sacrificed some time on themes and genre and literary form. But honestly, all of my courses taught a bunch of stuff about the concrete stuff of history and culture and like, I dunno, the fact that Milton dictated most of Paradise Lost to his daughters because he was blind and patriarchal.

Between the two you posted and The English Department and the Ghost of Literature and Introduction, I take their main points to be:
1. Universities are garbage. They are a constant corrosive force for institutional power, racism, sexism, and anti-intellectualism.
2. Literary criticism’s history is one of institutional power, racism, etc.
3. Postmodern lit crit that emphasizes form, text, reference, etc. is also garbage and misses the heart of literary texts (which may or may not be a useful class of objects to discuss at all). The heart of literary texts (which may or may not and so on…) are the concrete conditions out of which they arose.
4. Modern English departments are marked by narrowing the scope of learning to a vanishingly small point of gibberish that fails to connect to any non-self-referential knowledge or skill.

I don’t have much to say about most of that, except to say that points 2 and 3 are argued over all the time in literary criticism. I mean, he gets these histories directly from scholarly work done within academia. People in literary studies are talking about this stuff all the time. Point 4 is interesting, but, as I describe above, not at all my experience. Maybe I was lucky. Maybe I’m blocking out the bad parts. Maybe I’m brainwashed. There was a lot of utter rubbish said about the power of story and words in my English department, so it’s possible. But from my experience at a little Midwest school no one outside the region has heard of, studying English does help you learn stuff.

OK, the below is all me (i.e. not meant to be what I think parvumlycaeum was saying):

Keep reading

(via jadedviol)

@shellcollector

You’re still not comparing like with like here, though; the comparison is surely how long it takes a human infant to get to the point where it understands the game. The image to bear in mind isn’t a new gamer trying out an Atari game, it’s a baby repeatedly hitting itself in the face as it learns to control its own arms. And when humans start to learn abstraction, they do it by making (often hilarious and endearing) mistakes, over and over again, until they get there. Now, I think there might still be a case to be made that even babies are better at generalising the things they’ve learned than neural nets; but it would require a lot more evidence from developmental literature etc.

I think I agree with you, and I think this is an important observation, but whether it’s “good news” for neural nets depends on the question we’re asking about them.

On the one hand, humans do seem to need lots of “training data” (and human intelligence is thought to be related to our long childhood relative to similar apes).  Thus it might be the case that we have no innate “special learning tricks,” and that our initial fumbling around is (or might as well be) simple gradient descent in the parameter space of some relatively unstructured, general model.

On the other hand, whatever “human-like intelligence” is, it’s precisely not that – the long childhood is getting us somewhere beyond just caching algorithms for a bunch of specific tasks.  Eventually, we are able to make quick leaps based on abstract hypotheses – “recognizing the mechanics” – in a way a baby would not, even if both of us are equally new to the domain.  It’s conceivable that even this is more gradient descent, just taking place in a set of “good” parameter spaces we’ve learned, where the representation of “the abstract generalization of X” is a short distance away from the representation of “X” – but in any event, we have those fancy tricks, and reap the benefits.

So if we define “learning” as “learning in the way infants learn from sense data,” then it might indeed be fair to say that the machines are doing something like “learning.”  And this seems promising, in that we certainly manage to do well by starting out with that sort of learning.  But the colloquial sense of “learning” is the fancy abstract thing we think of in relation to human adults, and so far machines are not doing that thing.

(via shellcollector)

vaniver:

nostalgebraist:

It seems noteworthy to me that most of the impressive machine learning successes we hear about are offline learning.  In other words, the impressive task performance – recognizing dogs, winning Go, whatever – consists in showing off an already-learned skill.

If it’s a neural net, it’s just running with fixed connection weights when it does that thing.  The connection weights were set earlier by the researchers.  The reason we say things like “the neural net learned to do this” is that the researchers set the weights in a certain way – i.e. rather than figuring out good values by themselves (analogous to writing a traditional program), they ran an algorithm to tweak the weights over and over in response to lots of example inputs.

Usually, though, when we marvel at these things, we don’t seem to care much about the details of that process except that it happened.  We see a computer program that does something, and we marvel that a fixed, capable computer program exists even though no one specifically designed all the details.  You could say we’re marveling at the “learnability” of this program (or level of performance).

“Impressive things are learnable these days” isn’t really the same as “computers are good at learning these days.”  The latter idiom tends to make us imagine computers “picking up new skills” in a brisk, human-like way.  But if we were to actually look at how the program behaved during the training period, its learning process might seem very slow and even “stupid” – making the same mistakes over and over again in thousands of trials.

For instance, DeepMind has a paper on a neural net architecture that can equal or surpass human performance at Atari video games.  However, human performance is defined as “performance after playing the game for 2 hours”:

The human performance is the average reward achieved from around 20 episodes of each game lasting a maximum of 5 min each, following around 2 h of practice playing each game.

while the machine has practiced each game for a lot longer:

We trained for a total of 50 million frames (that is, around 38 days of game experience in total) and used a replay memory of 1 million most recent frames.

That is, what would be 38 days of playing the game from the human’s perspective – that many successive rounds of a simple Atari game.  If you imagine something watching over the computer’s shoulder as it played an Atari game for 38 days, it wouldn’t look like a very good learner.  We’d see it run through the same scenarios over and over again for hours, the changes in its behavior barely perceptible.  We’d have a party to celebrate when, on day 5, it finally learns to turn so it doesn’t fall into a hole – something it has done millions of times by then.  Well, specifically, if the hole is on its left – it doesn’t learn that it’s exactly the same to its right until day 7.  We stock up on champagne in preparation for the day when it figures out that it can shoot at monsters…
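(For what it’s worth, the “38 days” figure is easy to reproduce with back-of-the-envelope arithmetic, if you assume the standard DQN setup where the agent acts on every 4th emulator frame – the “frame skip” of 4 – and the Atari runs at 60 frames per second.  Those two numbers are assumptions on my part, not stated in the quote above:

```python
# Back-of-envelope check of the "38 days of game experience" figure.
# Assumptions: frame skip of 4 (standard in the DQN setup) and 60 fps.
agent_frames = 50_000_000          # "50 million frames" from the paper
frame_skip = 4                     # agent acts every 4th emulator frame
fps = 60                           # Atari display rate

emulator_frames = agent_frames * frame_skip
days = emulator_frames / fps / (60 * 60 * 24)
print(f"{days:.1f} days of game experience")  # ~38.6 days
```

which lands right around the paper’s stated total.)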

On the other hand, it isn’t insignificant that these game-playing programs were learnable.  We could tell the machine we want a program to play these games, and run some sort of search for programs fitting that description, and find one that’s as good as a human – and do that in a reasonable amount of time with current hardware.

The “search for programs” part is the “learning,” but it’s not like we’re searching in any sort of fancy smart way, just starting with something random and very slowly exploring the vicinity (gradient descent).  Hence the excruciatingly “dumb” learning performance envisioned two paragraphs ago.  The real progress comes in defining the type of program we’re searching through – if we define a particularly “good” type of program and only search through those, we’re more likely to stumble upon something good, even if we’re searching in the same clumsy way.
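(The “start with something random and very slowly explore the vicinity” picture can be made concrete with a toy sketch – this is an illustration of plain gradient descent on a made-up one-dimensional problem, not anything resembling the actual DQN training procedure:

```python
import random

# Toy illustration of "dumb" local search: gradient descent on
# f(x) = (x - 3)^2.  Each step is a tiny local tweak with no global
# "insight"; reaching a good solution takes many iterations.
def grad(x):
    return 2 * (x - 3)            # derivative of (x - 3)^2

x = random.uniform(-10, 10)       # random starting parameters
lr = 0.01                         # small step size
for step in range(1000):
    x -= lr * grad(x)             # nudge slightly downhill, repeat

print(round(x, 3))                # ends up very close to the optimum at 3
```

A thousand near-identical nudges to land on an answer a person would write down instantly – that’s the flavor of the “excruciatingly dumb” learning performance.)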

The success of deep neural nets (at certain things) would then mean that “a deep neural net” is a particularly good, clean way of encoding the sort of program that does those things.  You could think of it this way: we’ve found a language where certain useful things are very easy to say, and so even if we don’t really understand the language, we can still manage to say the useful things by just trying out different things and seeing what works.  With neural nets, we have this language now, and we’re using computers to try out the different sentences we can say in it, and we keep coming upon sentences that do cool things.

This should all be distinguished from the topic of how well the cool things can be learned.  Having the right language is important, but it’s only half of the picture.  We know a good language for representing Atari skillz on a computer, but not how to get the computer to have them without playing the game for 38 days.

I think this is underselling how interesting this is, but underestimating how much humans practice.

That is, the right thing to think about is not a person picking up a game they’ve never seen before, but a 5-year old playing their first video game. And this undersells how much transfer the 5-year old has; if you’re playing Mario, a kid with a body knows what it’s like to run and jump. (I had put in about 38 days of practice playing video games by the time I was 10, probably? 12 for sure.)

For example, someone (gwern?) estimated how much Go Lee Sedol had played / seen over his lifetime, and the number of games was actually comparable to the number of games that AlphaGo had played/seen when they had their match. On the one hand, not all that interesting–we’ve made something that can pick up and store Go knowledge at roughly the same rate as a human–but the interesting thing is that the rates are the same per game which is not per hour. Lee was about 30 years old and AlphaGo was about 2; similarly, if there’s a call center deep learning network that figures out how to navigate the trees / deal with people as well as a human, it might need subjective years (or even decades) of experience, but it can probably get those years in less than an objective month. And then we have a call-center bot that owns that economic niche.

I agree it’ll be even more impressive when we have a call-center bot that can then more easily become a docbot than a fresh network becomes a docbot, but I don’t think that’s all that far out.

@anosognosic made a similar argument, to which I said this (in tumblr replies):

@anosognosic: yeah that is definitely a consideration.  the neural nets have to learn how to “see” anew with each game, learn anew that things near the character sprite are especially important, etc.  but i think humans can still learn pretty quick in games that violate some of the standard assumptions, which wouldn’t make sense if we just degraded to gradient descent (very slow tweaks over very many iterations) when we couldn’t rely on an assumption.

particularly, even in strange/unfamiliar games, you can see humans having regular “oh, X does Y” realizations on the scale of a few rounds of gameplay

Admittedly, it is hard to argue about this without having any hard data to compare.  Without that hard data, we’re just relying on dueling impressions and intuitions.  But that goes back to the other point I was making in the OP, that we don’t treat the learning trajectory as the relevant part – I’d love to see detailed comparisons between human and AI learning trajectories in the literature, but currently that’s not a topic of focus.  The focus is on “where you can end up” (learnability) rather than how you get there.  (If there is a literature on this I’m not aware of, let me know!)  

The DeepMind Atari paper does have a supplemental video showing snapshots from its performance on a specific game (Breakout) at different points in the learning process.  (As far as I can tell, an “episode” means playing the game until you lose, which can be pretty quick.)  But the time resolution is pretty coarse and it’s hard for me to know what to make of it.

My broad, intuitive sense of these things is that human learning looks a lot like this gradient descent machine learning for relatively “low-level” or “sensorimotor” tasks, but not for abstract concepts.  That is, when I’m playing a game like one of those Atari games, I will indeed improve very slowly over many many tries as I simply pick up the “motor skills” associated with the game, even if I understand the mechanics perfectly; in Breakout, say, I’d instantly see that I’m supposed to get my paddle under the ball when it comes down, but I would only gradually learn to make that happen.

The learning of higher-level “game mechanics,” however, is much more sudden: if there’s a mechanic that doesn’t require dexterity to exploit, I’ll instantly start exploiting it a whole lot the moment I notice it, even within a single round of a game.  (I’m thinking about things like “realizing you can open treasure chests by pressing a certain button in front of them”; after opening my first chest, I don’t need to follow some gradual gradient-descent trajectory to immediately start seeking out and opening all other chests.  Likewise, the abstract mechanics of Breakout are almost instantly clear to me, and my quick learning of the mechanical structure is merely obscured by the fact that I have to learn new motor skills to exploit it.)

But admittedly, without the hard data it’s tough for me to claim that these algorithms don’t exhibit this “fast learning of mechanics.”  (All I can say is that the performance metrics in these things tend to grow steadily without sudden jumps, which seems suggestive.)
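(A toy sketch of the two learning-curve shapes I have in mind – the numbers here are illustrative assumptions of mine, not data from any paper:

```python
# Contrast between "gradient-style" motor learning, where success
# probability creeps up a little every trial, and one-shot "mechanic"
# learning, where performance jumps to ceiling the moment the
# mechanic is noticed.  All parameters are made up for illustration.
def gradual_curve(trials, rate=0.01):
    p, curve = 0.0, []
    for _ in range(trials):
        p = min(1.0, p + rate * (1.0 - p))   # small improvement per trial
        curve.append(p)
    return curve

def one_shot_curve(trials, discovery_trial=20):
    # Zero until the mechanic is noticed, then exploited immediately.
    return [0.0 if t < discovery_trial else 1.0 for t in range(trials)]

g = gradual_curve(200)
o = one_shot_curve(200)
# The gradual learner is still imperfect after 200 trials, while the
# one-shot learner is at ceiling from trial 20 onward.
```

If the published performance metrics only ever look like the first curve, never the second, that’s the suggestive – if not conclusive – pattern I mean.)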

(via vaniver)

tadrinth:

nostalgebraist:

nostalgebraist:

Returning to my graduate thesis (for what is hopefully a brief round of polishing before the whole thing’s over) has put into stark relief how unpleasant grad school was.

Suddenly I’m back to the old (half-forgotten!) daily pattern of accomplishing a few minor things in the first few hours of the day, then spending the rest of the day in an anxiety loop where I plan to do a more substantial task and then worry about how hard it’ll be given how anxious I already am, which makes me more anxious, etc.  Fiddling with code and running endless tests becomes a way to avoid thinking about the bigger picture, because thinking about the bigger picture feels like watching some absurdist play about how there are no longer any clear standards of value in the world.  My desire for alcohol in the evening has increased, just as it immediately decreased when I stopped working on this project earlier in the year.

I hope this isn’t just “what confronting something serious and difficult” is like for me.  But I’ve had other challenges that don’t make me feel like this; the first time I remember this distinct thing was when I was a research assistant after college, which – hmmm – was the first time I did “real” academic research (as opposed to my undergrad thesis, where I just did a project I thought was cool and didn’t expect to publish any results or otherwise interact with academia).  So probably(/hopefully) I just hate doing academic research.

There are probably a number of distinct reasons why I hate doing academic research, and I’ve obsessed over one possibility or another from time to time, but here is one that crystalized for me after writing the OP:

In academia, it’s not just that the value of what you are doing is uncertain.  It is uncertain, because you’re doing research and by definition no one knows for sure what the outcome will be.  But that in itself I might be able to live with: negative results are still results, and there’s nothing shameful about saying “this seemed like it might work; we tried it; it didn’t, for these reasons; now you know.”

It’s not just that the value is uncertain, but that you’re working alongside many people – many of them incredibly smart and successful – doing things of likewise uncertain value, and that, in merely doing your research, you are implicitly claiming that your inherently-uncertain project is a relatively good bet among all conceivable such projects, or at least all the ones currently in play.  You are implicitly saying that the direction you have chosen is a shrewd direction, not just from your personal perspective, but from the perspective of the entire vast many-tentacled apparatus of Science*.

The boundaries of “what your field knows” are wide, twisting coastlines you can’t honestly claim to know in full, and at every point on this coast, just offshore, there are innumerable new experiments that could be done, new directions that could be tried.  Only a finite number of them will be tried (in the next year, say), and in many cases – when applying for grants, say, or competing for publication in a journal – you are fighting in a zero-sum competition to make your direction one of those few.  Even when the finite pie isn’t as visible – when giving talks, say – you’re still presenting your little in-progress expedition not just as one interesting on its own terms, but one (as it were) advantageous to the whole nation.

In actuality, you only have a vague mental map of the island nation and its coast, with most of the detail concentrated on the little area you call home, and the real reason you’re setting out from the local port is because it’s local and you know and like the waters.  But you have no option except to play ball with the whole national community of explorers.  You may not especially care about Admiral Bigwig’s exploits off the other end of the coast, but like it or not, Admiral Bigwig’s contributions to the national interest will be ranked against yours.  You will be asked why Her Majesty should send resources to your backwater, specifically, of all the ports on the island, and suddenly every port on the island is your business.

Supposedly it’s possible to avoid this by specializing in a small enough subfield, but I’ve never actually had that experience.  After all, people in different subfields can still effectively do the same things, and often they do.  If you’re in a small enough pond, you then have to worry that the size of that pond might shrink to zero as it is revealed to be just a shallow murky subset of something bigger and cooler.

Research is inherently combative, “political,” involved with everyone else’s business.


This kind of thing isn’t unique to academia, and indeed it may not be possible to avoid it entirely.  Job applications, say, are a bit like this (although “contributing to a company” is easier to get a mental handle on than “contributing to a scientific field”).

I get the impression that the startup world is a lot like this, which means I should be very careful about getting involved in it.  In the broadest terms, I don’t want to be in an organization where everyone’s working their asses off but no one quite knows what “the product” is or why it’s any good.

*(focusing on science here just because that is the kind of academic work I have experience in)

Competition for venture capital is a bit like this, but more from the perspective of the VCs; I’ve worked at two software-as-a-service startups.  At both, all of the employees knew exactly what the product was and why it was any good: the software did roughly the same thing as a competitor’s product, except that theirs was built twenty years ago and ours was built today.  There’s been so much improvement in software infrastructure that you can blow big, slow-moving companies away by rebuilding their product, because the modern infrastructure lets you either charge less or add features faster.   

That’s not so much saying that every startup is correct about what the product is or why it’s any good, but having a very good narrative for those two things is a big factor in attracting venture capital, so most startups will have a good story even if it isn’t true.  

That issue does affect founders, so I wouldn’t recommend starting your own company, but it doesn’t sound like something that would be an issue when joining a startup.

I pursued a PhD but instead left with a Masters; switching careers to programming has done wonders for my emotional health and income.  

This is interesting and helpful, thanks.

I guess I’m confused about the point at which “the issue affects founders, but not new hires” kicks in.  The comparison to academia comes to mind when I think of the stories I’ve heard about startups in their chaotic early days (with very few employees, etc).  It seems to me (is this wrong?) that if your company is five people in one room frantically talking about how to make money, potential pivots, what narrative to spin for VCs in the next round, and so forth, it doesn’t really matter whether only four of the people were there at the beginning (“founders”) and the other one came aboard later.

Of course not all startups are in this phase, but I’m not sure what a safe proxy for the transition is, besides “no longer being thought of as a startup,” which is what I had in mind when writing the previous post.  When I think about “applying to work at startups” I think about a lot of places I’ve looked at that aren’t selling anything yet and/or have a team of ~5 people, where I wouldn’t feel comfortable saying “oh, none of those anxieties would reach me, since I’m not a founder.”  Any thoughts?

(via tadrinth)

shedoesnotcomprehend:

nostalgebraist:

I know that all the evidence about crop circles indicates that they are made by people (they’re more common in areas with higher population and easily accessible areas, they started appearing much more often after the public got interested in them, those two guys revealed that they made a bunch of them and explained their process, etc.)

But what I want to know is how random people manage to pull it off

Like, I know there are professionals who make them for commercial clients, etc., and they have a lot of resources.  But if you’re just some (drunk?) friends making a crop circle for fun one night, how do you make it look good (from above)?  Do you have one person on a really high ladder giving orders?  Are there a bunch of “shitty” crop circles out there, made by people who don’t know what they’re doing?  And how do the good techniques spread from place to place, given that there don’t seem to be in-depth public resources about it?  Is there an underground network of crop circlers out there, swapping tips?  Has any journalist gone looking for it?

I was under the impression that they were pretty straightforward to make? Shove a peg into the ground, tie a rope to it, use that as your radius, trample down corn within that circle, bam.

This of course won’t get you the fancy ones with complex fractal patterns, which I assume are what those professionals are up to.

(And yes, I would assume there’s plenty of shitty crop circles, which don’t make the news because “some of my crops were trampled down in a vaguely circular patch” isn’t really all that newsworthy.)

Ah, yeah – a number of people have said things like “they don’t seem that hard to make,” and I think I’m conflating crop circles in general with “the fancy ones,” which are the ones I always see pictures of in the media

There are so many “fancy” crop circles out there that I guess I figured some of them were of unknown origin?  For instance this site has a gallery of really impressive crop circles from every calendar year since 1994, and talks about them as though the artists are unknown (although that may just be willful blindness, or playing up the mystique to get people to buy their books).  OTOH there are plenty of circles out there that don’t look as difficult to make (I feel like that site answers my question about “shitty” crop circles in the affirmative, e.g. check out #22).

(via condensed-theorem-shop)

dagny-hashtaggart:

nostalgebraist:

I have this strong feeling that the current American cultural institution called “college” is a very messed-up thing, and one of those things that people in the future will read about in history textbooks and think “WTF, they really did that?”

Generic accreditation process for white collar jobs; bizarre industry that jacks costs up and up and can still get people to go into exorbitant debt just to buy their products; social stratifier with fixed, ancient labels that get assigned based on a mixture of economic background and how impressive you could look on paper at age 18 (the different specific colleges and their gradations of rank reminds me of the “he’s from a very good family” stuff about social class in 19th century novels); widely shared coming-of-age experience mythologized as a paradisiacal hiatus between adolescence and responsible adulthood; system for overseeing the production of academic research; oh and, I almost forgot, also designated place where people who want or need advanced non-vocational education can get it?

What a mess

I think college is an institution with a deeply arbitrary set of characteristics, but I’m not sure to what extent that really picks it out from other social institutions. We hear a lot about how modern American clusters of political affiliation are essentially the product of historical accident, and I think that’s largely true*. From a certain perspective, most of our social institutions are pretty damned arbitrary. What I think is more important (and I think this is one of the things Rob had in mind, but I want a bit more categorical clarity here) is whether it’s dysfunctional. I suspect in some ways it kind of has to be by serving too many masters, but I’d emphasize the functional perspective in our evaluations of the modern university system in any case.

*(Perhaps even more true than many of its exponents believe, since many of them are libertarians praising their own political ideas by contrast, while not realizing that libertarianism in a broader sense is no less a coalition of groups with very different substantive interests that happen, for the time being, to result in the same policy preferences.)

I think some parts of it really are (somewhat) functional, even for functions that aren’t explicitly written on the tin – for instance, it does form its own distinctive social class system, but as @thefutureoneandall mentioned, it is one that isn’t solely based on family wealth and ties, and thus may be better than what it would leave behind if it disappeared.

What got me on this line of thought in the first place was thinking about the common idea of “a college education” as this magical thing that transforms people in a profound, definable way, makes them qualitatively more aware, capable, something.  It is true that many people learn a lot in college, but I think that varies widely, not just by school but (perhaps moreso) by whether people came wanting to learn and prepared for it.  Even at the best academic schools, a lot of what you get out depends on what you put in.  And a lot of people do go through other parallel kinds of growth in college, but I think this may just be what would happen anyway at that age and/or when people stop living with their parents.  When we say someone has “gotten a college education” it is really very unclear, in the absence of other information, what that entails.

And I was thinking that this mythology of “the college education” is maybe something we maintain in order to rationalize some of the things I mentioned – the way it costs so much, the way it’s a blanket accreditation process for so many things, the way we expect anyone who can do it to do it.  If it really was a magical transformation, that would all make sense.

But if you look at all of the individual aspects, it’s not clear that it is doing any of them especially well, compared to the conceivable alternatives.  It’s possible that the whole is more than the sum of its parts – say, that it makes sense to have a sort of quasi-formal re-stratification process around the onset of adulthood, and that it makes sense for it to be centered around education, and that once this is in place it’s a decent way to sum up people’s merit relative to the (abysmal) common alternatives.  But even in that picture, some of the aspects don’t fit – say, the associations with research (really dedicated researchers often seem to see teaching as a burden and delegate a lot of it to grad students, benefitting no one except arguably/sometimes the grad students), and with sports (most college sports teams are money sinks, not sources).  I guess the “small liberal arts college” is a niche that exists to minimize those two problems, so maybe they come out looking pretty good here, although the other issues (high cost, limited age bracket, the “generic credential of intellectual/social merit awarded for highly varied activities” thing) remain.

(via dagny-hashtaggart)

What Kind of Man Spends Millions to Elect Ted Cruz? →

gattsuru:

nostalgebraist:

nostalgebraist:

BTW, I had only heard of Cambridge Analytica because I read this article yesterday, about weird, powerful conservative donor Robert Mercer

It is really interesting and worth a read

Seriously though this article

Apparently everything from Ted Cruz to Breitbart to the Cato Institute to gold standard advocates is being puppet-mastered by this eerily quiet evil genius with his fucking owl-themed mansion and a 2.7 million dollar model train set in his basement

What even is reality

[Disclaimer: I have a personal grudge against Bloomberg’s editorial stance.]

It’s… a bit less interesting than the author’s trying to make it.  Few of the factual claims are false, but the stuff it skips over changes many things, and the interpretations range from credulous to nonsensical.  Ted Cruz ended up publicly blackballed after the NonEndorsement Speech, and to the extent he received support at all in 2012 it was specifically as a result of attempts to unseat Dewhurst (largely over tax law).  The goldbugs have been around for a long time, and they’ve not suddenly changed tune on much.  It’s not clear how much Mercer or the family are puppeteering or even seriously donating to Cato: he doesn’t show up on the lists of major donors older articles have assembled, usually a sign of smaller donations, and he doesn’t agree with a lot of their politics on top of that.  A lot of Mercer’s policies haven’t trickled very far: he’s been one of the ‘immigration reform’ people on the Republican side, particularly for DREAM and the guest worker programs, even as the Tea Party and Trump folk have rather heavily resisted them.

It’s also not helped by the author overreaching.  Orient, for example, is quite the kook.  But the audience she’s writing to will read “on the other side” as something drastically different from claiming the government “have taken part” in a spree shooting.

((Fans of the Vorkosigan Saga will find the name of the conference kinda funny, though, given that Jackson’s Whole is an ultra-capitalist hellscape in that series.  Likely both were just inspired by the result of attempts to charm Volcker in the 1980s, though.))

I mean, Mercer’s still a bag of dicks, and his astroturf ranges from schlocky to offensively bad.  He’s a pretty textbook social conservative, funding people to promote or win textbook social conservative positions, with all the bad advice that involves.  The only particularly interesting part is that he spent on Trump where most conservative funders thought it a lost cause.

And… uh, if you aren’t a hypocrite, you don’t have standards, but I can’t overlook the irony of this coming from the pages of Bloomberg.  Bloomberg’s vanity funding of oddball politicians is notorious among conservative communities, to the point where the “Mayors Against Illegal Guns” group is a byword for someone about to be arrested for a violent crime.  He’s spent more money on donations for soda taxes than the entire confused chart of Mercer influence, and that’s not even adding in the favorable Bloomberg coverage or the Bloomberg-funded experts on other media.  A moderately skilled writer could just as easily paint him as Lex Luthor to Mercer’s Evil Gomez Addams.

That’s a perfectly acceptable thing – in addition to supporting their first amendment rights to be idiots, I’d rather they waste money trying to persuade people poorly than use more effective types of manipulation – but the spooky tones and conspiracy-theory vibes don’t make for a good look when you’re doing the exact same thing, or taking employment from someone who does.

Ah, this is really interesting context, thanks.

And … yeah, wow, soda taxes?  Out of all the issues to spend $18 million on …

I’m still wondering why Mercer supports all these “kooks,” though.  Is he just trying to move the overton window?  Is it that what he’s spending on them is spare change to him, and he finds it fun?  (It sounds like so far he’s spent less on Robinson, the weirdo chemist, than on his model train set – although he did sue the train set company for overcharging him.)

The other thing that interested me in the article was the sense that he’s a deeply respected insider even beyond the money – e.g. his wife did convince the Cato guy to change his decision, and if that’s because the guy values his input rather than for the money, that’s not necessarily less noteworthy.  And the parties that “have become legendary in Republican circles,” etc.  (Also, there was another article – I can’t find it now, there seem to be a lot of similar articles about this guy out there – which said he was a sort of “donor’s donor” whose support could convince other donors that a candidate was serious.)

Of course it’s also possible that he’s just an eccentric who people indulge because they like getting his money.  But they could also buy into the whole “mysterious math genius who made billions by being smart” mystique.  (He sounds a lot like an Ayn Rand hero – the technical advance whose potential is squandered by “the powers that be,” the whole “fire the linguists” thing)

(via gattsuru)

youzicha:

nostalgebraist:

youzicha

replied to your post

“bartlebyshop replied to your post “Question: are there any tests of…”

I mean, we *do* say e.g. “yes, he is very fit”. This is only informative because there is a positive correlation between many/all kinds of fitness. Doesn’t this mean that we already use the concept of a “general fitness factor”, similar to how we talk about intelligence?

Yeah, that is definitely true – but I think the way in which we talk and think about these two “general factors” is still very different.

In the fitness case, we see statements like "yes, he is very fit" as casual shorthands, and when we think of what it means to have a scientific understanding of fitness, we think of finer- and finer-grained breakdowns of the different dimensions and systems that are bundled together in the word “fit.”  For example, “he’s aerobically fit” is more multi-dimensional and more “sciencey” than just “he’s fit,” and if you want to get really sciencey, you break that down into even more dimensions by talking about, say, his VO2 max and vVO2max.

In the case of intelligence, we are told that the “sciencey” way to think about things is to focus on the coarse-grained, one-dimensional thing.  Books are written about how, even though the general public doesn’t put much stock in the one-dimensional thing, scientists think it’s really important and well-founded.  If exercise physiology were like intelligence research, it’d be all about one-dimensional fitness and what it correlates with, how it varies between groups and over one’s lifetime, etc.  (Try paging through some paper titles from the scientific journal Intelligence and mentally replacing “intelligence” and the like with “fitness.”  A strange scientific field from an alternate universe!)
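(Aside: for anyone who hasn’t seen the statistics behind the “general factor” idea both posts are gesturing at, here’s a minimal synthetic sketch – all the numbers and measure names are made up for illustration, not drawn from any real fitness or IQ dataset.  The point is just that when a set of measures are all positively correlated, the first principal component of their correlation matrix loads on every measure with the same sign, and that component is what gets called a “general factor.”)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1000 people, 4 measures (think VO2 max, grip
# strength, etc. -- purely illustrative), each driven by one shared
# latent factor plus measure-specific noise.
n = 1000
latent = rng.normal(size=n)
scores = np.column_stack(
    [0.7 * latent + 0.7 * rng.normal(size=n) for _ in range(4)]
)

# Because every measure loads on the same latent factor, all
# pairwise correlations come out positive (the "positive manifold").
corr = np.corrcoef(scores, rowvar=False)

# The "general factor" is, statistically, the leading eigenvector of
# that correlation matrix.  Since the matrix is entrywise positive,
# Perron-Frobenius guarantees that eigenvector has entries of a
# single sign: every measure "loads on g" the same way.
eigvals, eigvecs = np.linalg.eigh(corr)        # eigenvalues ascending
g = eigvecs[:, -1] * np.sign(eigvecs[0, -1])   # fix sign convention
print("loadings on the general factor:", np.round(g, 2))
```

(The same construction works whether you call the columns “tests of fitness” or “tests of intelligence,” which is exactly the symmetry being pointed at above.)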

I guess my impression of what “we are told”, and generally where the fault lines are in this debate, is a bit different from yours.

I would say IQ fans are generally interested in the details of how intelligence arises. Earlier today someone linked to a paper that claims they can decompose g into separate ‘reasoning’, ‘short term memory’, and ‘verbal’ components, but I’d expect everyone to agree that, if that holds up, it’s a more “sciencey” model than the one-dimensional one. Similarly, some of the early intelligence researchers tried to correlate IQ with nerve transmission speed, which shows that they were trying to isolate the causal mechanisms behind the statistical phenomenon. And finally, when I see people blog about IQ, they also seem interested in e.g. comparing verbal and quantitative SAT scores for different populations; i.e. even when they talk about descriptive statistics they seem interested in more than one dimension.

At the same time I think it is also possible to find situations where a one-dimensional measure of fitness is seen as “more sciencey”. It’s certainly possible to find papers with “fitness” in the title, but what comes to mind for me is epidemiological studies about BMI. There are lots of studies about obesity epidemics, comparisons of BMI distributions of different countries and how it correlates with various diseases, how that should influence health advice, and so on. It’s clear to everyone that a single number is not a perfect measure of anything, but the style is very “scientific”.

Instead, I think the key dispute is whether to use descriptive statistics at all. For example, the Amazon book blurb you link to states,

Just mention IQ testing in polite company, and you’ll sternly be informed that IQ tests don’t measure anything “real” and only reflect how good you are at doing IQ tests; that they ignore important traits like “emotional intelligence” and “multiple intelligences”; and that those who are interested in IQ testing must be elitists, or maybe something more sinister.

Yet the scientific evidence is clear: IQ tests are extraordinarily useful. IQ scores are related to a huge variety of important life outcomes like educational success, income, and even life expectancy, and biological studies have shown they are genetically influenced and linked to measures of the brain.

I haven’t read the book, but from the description, I would expect he is not arguing against things like the Hampshire et al. 3-factor model, but against people who claim this sort of thing cannot be quantitatively measured at all.

Similarly, there is a split about whether BMI guidelines are useful advice (“look at these correlations”), or despicable body-shaming (“one can be healthy at any weight”).

I’m in a bit of a rush but I thought about this stuff on the bus, so I will write something quick and I hope it makes sense –

Keep reading

(via youzicha)

napoleonchingon:

nostalgebraist:

jack-rustier:

raginrayguns:

nostalgebraist:

Speaking of grad school, this blog post resonates with me a lot, and I’m glad I found it when I did, because it cleared up some aspects of my experience that I’d been confused by

Grad school is about becoming an academic.  It’s about joining the academic culture.  Learning and discovery and knowledge and discussion are all involved, but in the same way that physical fitness is involved in Marine Corps boot camp.  That is to say, intensely, but if you go to boot camp just looking for a workout routine, you’re gonna have a bad time.

From linked post: “Graduate school is not education. It is socialization. It is about learning to behave, about mastering a rhetorical and discursive etiquette as mind-blowingly arcane as table manners at a state dinner in 19th Century Western Europe.”

Humanities/social sciences are areas that I can see getting very political, but what about hard-STEM graduate school? It’s probably not immune, but would it be less soul-sucking?

It really depends on the field and sub-field and school, I think.  But there are always going to be conflicts over essentially subjective things, no matter how “hard” the field is, because research inherently involves asking questions that no one knows the answers to yet.  This presents a constant explore/exploit dilemma, and people can argue endlessly about whether certain lines of investigation are worth following further or not.  And scientists tend to be defensive of their favorite lines of investigation, in part for reasons related to personal branding and grants etc.  Plus, even in areas that have direct practical applications, there’s always a time delay between academic research and implementation (in industry or w/e), and plenty of room to argue about what is and isn’t “useful” before anyone really knows yet.

(I think there may even be special frustrations here unique to STEM, in that STEM can allow people to produce demonstrations that look superficially “conclusive,” even when they aren’t.  Someone provides hard numbers showing that X is more predictive than Y … but what assumptions went into those numbers?  Someone mathematically proves that A can’t handle cases of type B … but how much does that really matter?  And so on)

Yes, but these dilemmas actually arise outside of science academia as well, except outside of academia the answer to them is *almost always* shut up and do this cause I said so. If inside of academia it feels weird that it sometimes still is “shut up and do this”, it’s because academics have internalized being able to defend their position as an important value.

Or maybe that’s not a good way of putting it, because that depends on your work environment, etc. I would say, though, that outside of science academia, if your boss tells you “follow this line of investigation” and you think it’s a bad idea, you actually have way less recourse than you do in science academia.

(I wrote a thing about this once)

I think we’ve had very different experiences in science academia; the kind of environment you describe in the linked post is not at all like what I experienced.  (It does sound like what I was expecting out of academia, and didn’t get.)  I guess it really varies.  Not sure what generalizations we can actually make; we’re both largely reasoning on the basis of our own personal experiences, it seems.

You’re right that the dilemmas arise everywhere, but I want to point out another difference that weighs in the opposite direction: in science there is always a team lead who is a technical person.  That is, your lab (or your theory group) has a PI, and the PI tends to be the person with the most technical expertise and the manager who gets the final say.  This means that arbitrarily fine-grained technical issues can be dictated on a “because I say so” basis, and the PI’s technical expertise means that their seemingly arbitrary dictates can’t be dismissed as intellectually irrelevant even if no one else understands them.

By contrast, in other areas you may have, say, an engineering team in which there is relatively free discussion, and then the boss who gets to say “because I say so” but simply does not have opinions about a lot of the technical distinctions (and is probably unaware that they exist).  The same problem can still happen if there’s a tyrannical lead engineer or something, or if the boss does have some fine-grained preferences (“we have a contract with this company to use their products,” etc.), but it’s not built into the system.  More specifically, the “manager == resident technical expert” thing isn’t built into the system, so it’s possible to put the arbitrary dictates in one mental box and the actual issues in another separate one, rather than seeing the dictates as technical wisdom beyond your ken.

(via sungodsevenoclock)

@voximperatoris

On the other hand, I think your concept of “competition” is flawed. The competition praised by Rand and most other libertarians isn’t the neoclassical “pure and perfect competition” of, like, millions of identical wheat farmers where there is no product differentiation and they “compete” only in the sense of all changing prices instantly to reflect market conditions.

It’s the idea of freedom to compete, that there is no legal barrier to entering the market if you’ve got something better to sell. But there’s nothing wrong, in this view, with there being only one seller, if he got there because he has the best product. Even such a seller is constrained by potential competition and therefore can’t merely start “pricing according to marginal revenue,” which is in the neoclassical view the problem with monopolies.

And it’s competition in a meaningful sense of, you know, entrepreneurs coming up with innovations and new ways to differentiate their products from the rest (which doesn’t happen in “perfect competition”).

The Objectivist economist George Reisman has a good essay, “Platonic Competition”, on this subject (though I don’t agree with everything he’s ever said).

As for weapons and stuff, Rand never had her heroes making weapons. It’s the villains in Atlas Shrugged who work on Project X, which is some kind of futuristic weapon of mass destruction. She intentionally based the “tragic villain” scientist, Robert Stadler, after Oppenheimer. (And the “pure evil” one who stands as a foil to him after B.F. Skinner.)

Of course, Rand wouldn’t condemn the idea of making weapons as such (for instance, to be used for self-defense), but she would condemn the idea of lusting for power and using those weapons in aggression against others. “Imposing your will on others” and “quashing other people’s ambitions” are sort of package deals, anyway. There’s nothing wrong with “imposing your will” to have a free society and quashing the ambitions of tyrants, after all…

Ah, I wasn’t thinking of “perfect competition,” I was thinking of the “freedom to compete” that you’re talking about – the arguments I’m used to hearing are things like, “if one company sells a bad product then other companies will pop up and steal their customers by selling better ones; if one company has bad customer service there’s an opportunity to profit by doing the same thing with better customer service; and so on for everything.”

Where I differ from these people (or one of the ways) is that I think there is often a lack of practical freedom to compete even where there is legal freedom to compete – for instance due to lock-in / network effects (“this new Facebook alternative is so much better than Facebook; if only anyone I knew was on there”), or because existing big players can spare the resources to squash any potential competition before it gets big enough to be a threat.

(Sidenote: granted, such squashing is often done via some interaction with the government or the law, but I think that is because the government is currently the group you look to if you want certain kinds of power exerted on others, monopoly on force and all that; in some an-cap paradise they’d just pay whichever groups had the equivalent power)

So there can be practical unfreedom-to-compete even when there is legal freedom, and it can go the other way, too.  Violating the law isn’t physically impossible, and often isn’t even practically impossible; lots of money gets made every day by deliberately violating the law.  So it’s all practical unfreedom, in a sense: both private and public entities will try to stop you from entering the market, and sometimes they will succeed.

I imagine the libertarian answer here is that I am making this distinction on an abstract semantic level, ignoring probabilities and magnitudes for mere technical possibilities, and that in fact (they would say) government is the main offender by far and the only available directions of improvement involve restraining the government.  But then it is still strange for libertarian fiction to focus on cases where people rise to great practical (if not legal) power, since this seems like precisely the sort of thing the previous sentence is brushing aside as negligible.

(It makes sense to me that there would be tension between libertarians and Randians on this score, and maybe this all just boils down to that tension, which may exist inside the heads of individual writers, even)

(via voxette-vk)