The Craft & The Community - A Post-Mortem & Resurrection →

nostalgebraist:

oligopsoneia:

nostalgebraist:

This post is gigantic (I admit I only skimmed it), contains wild and massive claims made with little to no firsthand experience, could stand to have the personal elements more clearly delineated from the ostensibly objective-descriptive elements, and I don’t know if even half of it would withstand any kind of close inspection –

– but, once again, I’m very pleasantly surprised that the rationalist community is talking openly about why the “go out and change the world” thing did not happen, rather than sweeping it under the rug

(Now, actually going beyond mere talk – that’s the part I’m not yet optimistic about)

There’s an assumption buried right near the beginning that Taking Ideas Seriously means “the Rationality Community” should be a center of agency, rather than a social scene that people hang out in sometimes because it’s fun (while fulfilling positive moral obligations and the like elsewhere). This is just flabbergasting to me.

I think that assumption makes sense because the post is springboarding off the Craft and the Community sequence, in which Eliezer dreams (in a way he admits is idealistic) about IRL rationalist groups that would serve as world-improving “taskforces,”  and/or develop rationality through experiment and make possible much higher magnitudes of rationality, and/or “identify as elements of the Common Project of human progress, the Neo-Enlightenment,” etc.

It’s all quite vaguely imagined and could be interpreted in various ways, but there was clearly a dream there that rationalists might start getting together IRL and doing big, fundamentally new things made possible specifically by being rationalists together in communities.

Clearly, this has not happened.  My opinion (and possibly yours?) is that it didn’t happen because LW-rationalism wasn’t actually a recipe for doing big new things, or even a recipe for such a recipe.  The author of the linked post thinks it could have happened and only failed for a bunch of historically contingent reasons.  I don’t agree, but saying “it failed, but maybe we could try again?” is much preferable to not talking at all about the failure.

To riff a little more (for old times’ sake, haven’t used the #big yud tag in forever):

Why do I think LW-rationalism wasn’t a recipe for doing big new things?  Because LW-rationalism focused on correcting individual judgment, but to accomplish big new things you need organizations of people, and the biggest obstacles to improving the world are organizational in nature.  Getting big things done is not a matter of taking a bunch of individually optimized humans, putting them together in a building, and letting them run.  The really important margin for improvement lies in getting your humans to work together, doing it with strategies that scale, with strategies that are robust to unexpected events and individual points of failure.

LW-rationalism would sometimes talk about relevant issues, but only abstractly (coordination problems, game theory) or speculatively (plausible-sounding theories of group dynamics, from the armchair).  In the SSC era, there was a lot more talk about social stuff, but mostly as it related to politics.

The kind of knowledge I claim you’d need is more like management lore.  The kind of stuff you could learn from relatively responsible and moral people who’ve spent a lot of time running companies or government organizations, some of which is surely written down somewhere, but less systematically than one would like.  (The knowledge relevant to correcting individual judgment is much more systematic and accessible – just pick up Kahneman and Tversky – so this may be a case of looking where the light is.)

In the years since 2009, rationalist-branded organizations have focused on correcting individual judgment (e.g. CFAR) or applying corrected individual judgment (e.g. MetaMed), without (as far as I can tell) much focus on optimizing the organizations themselves, qua organizations.  Rationalist ideas about organizational effectiveness are still mostly unsystematic folklore and armchair speculation, much more so than rationalist ideas about individual effectiveness (or society-wide politics).

An extreme example of this asymmetry is Dragon Army, in which a CFAR instructor (who has presumably thought quite a bit about individual effectiveness) decides he wants more organizational effectiveness and designs an organization from scratch on the basis of a fictional organization, evincing little awareness that organizational effectiveness has been sought and achieved before in the real world and that its seekers might have useful guidance.

SSC brought a (welcome) appreciation of the importance of social-level roadblocks, but without much hope for solutions, at least not before the eschaton.  In 2009, Eliezer dreamed of a self-improving, world-improving community that would just so happen to invent an AI god as one of its many collective efforts; five years later, in Meditations on Moloch, we learn that the emergent dynamics of human groups will inexorably eat us and everything we love, and this will only stop when an AI god is (somehow) created.

(via nostalgebraist)

1109514775:

nostalgebraist:

More on tech hiring being possibly broken:

A pattern I’ve noticed, in friends and acquaintances seeking tech work, is this superficially strange conjunction of “your long-term prospects are very good” with “you will get many rejections.”  That is, in the area/industry/roles I’m most familiar with (mostly entry-level), everyone who’s qualified and who has the resources to seriously job-hunt for a long period of time will eventually get at least one appealing offer, where “eventually” means something like “within a year, often much sooner.”  But the first appealing offer typically arrives after a string of rejections – not just cold applications that go nowhere, which is to be expected, but strings of interviews lasting multiple weeks, some including onsites, which eventually lead to rejections.

The reason this seems strange to me is that if a person interviews with 8 companies, gets rejected from the first 7, gets an offer from the 8th, and ends up happy and productive at company #8, does that mean that companies #1-7 screwed up by passing on a good candidate?

When I ask people this question, the standard answer is something about “fit” – technical fit for the demands of a position, match between personal work style and the nature of the position, and “culture fit” for the whole company.  That is, perhaps companies #1-7 recognized that the person could do the job on a basic level, but found someone else who was better aligned with all the little particulars, whereas with company #8 the particulars all lined up.

Something about this explanation feels fishy to me.  For one thing, people are capable of adapting to different circumstances; it’s not like each person has exactly one type of environment they are capable of thriving in.  But also, I suspect that the fine nuances of “fit” are not actually discernible through interviews.  This is especially true for entry-level positions, where no one actually knows how well the candidate will do in one version of a job vs. another (the empirical test has not been done).  If you are hiring for an entry-level position, you are inherently taking a chance on whoever you hire, and allowing them to try something out and see if it works.  I’m wary of the idea that “we don’t think you’d thrive here, based on a few hours of conversation with you” is helpful to the candidate.  What would really help the candidate is to get a job, and then see if they thrive in it.

I suspect that, as with college admissions, the real story is that you have to choose between multiple qualified applicants somehow, and a lot of this stuff is reading signal into noise so as to convince yourself that your decision-making process is better than rolling dice, even if it isn’t.  There is no way to avoid having to make these choices, but I worry that all of the reading-tea-leaves stuff is a significant inefficiency.  The total number of interviews done by all candidates is necessarily equal to the total number of interviews conducted by all companies, so if candidates are unnecessarily doing 8 interviews instead of 1 or 2 or 3, that means companies are also doing more interviews than they need to, and that has a resource cost for the companies.

I think part of what’s going on is that companies perceive the cost of a false positive to be very high, but aren’t terribly worried about false negatives. If they pass on someone who would have been great? Whatever. That’s fine. There’s someone else in line who is pretty much the same. But hiring someone into the company who can’t do the job is a huge strain on the workplace. You spend time onboarding them and have to decide how long to stick with them to see if they pull it together, and you have other employees picking up the slack of the work this person isn’t doing, or redoing the work they do poorly, and those employees then resent you for hiring this person, and then eventually you have to fire them and it’s all unpleasant and costly for everyone.

So I hear. I don’t really know. That’s just the justification I’ve heard.

Ah, that makes a lot more sense than the “fit” explanation.

Thinking about this a bit more: the cost of false negatives is spending more resources on interviews, and also having the position sitting around unfilled, when they want it to be filled.  The latter cost can be big or small depending on whether it’s a mission-critical job or just a “hey, might be cool to have someone do this” job, but for the more critical jobs the false positive cost is also higher.  These countervailing forces would help equalize the costs across different jobs, and encourage the setting of one single (high) bar for what it means to pass an interview (which would be convenient for its own reasons).  Hence the wide applicability of “you’re going to do a lot of interviews” as advice.
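The false-positive/false-negative asymmetry above can be made concrete with a toy simulation (my own illustration, not anything from the posts above, with made-up costs and noise levels): candidates have a true quality, the interview gives a noisy read on it, and the company hires anyone whose signal clears a bar. When a bad hire costs much more than a missed good hire, the cost-minimizing bar sits well above “probably qualified,” which is the single-high-bar dynamic described.

```python
import random

random.seed(0)

def expected_cost(threshold, fp_cost=10.0, fn_cost=1.0, n=100_000):
    """Average cost per candidate when hiring on a noisy signal.

    Candidates have true quality ~ N(0, 1); "good" means quality > 0.
    The interview adds N(0, 1) noise. Hiring a bad candidate costs
    fp_cost; rejecting a good one costs fn_cost.  All numbers here
    are illustrative assumptions, not measurements.
    """
    total = 0.0
    for _ in range(n):
        quality = random.gauss(0.0, 1.0)
        signal = quality + random.gauss(0.0, 1.0)  # noisy interview read
        hired = signal > threshold
        if hired and quality <= 0:
            total += fp_cost       # false positive: bad hire
        elif not hired and quality > 0:
            total += fn_cost       # false negative: missed good hire
    return total / n

# With bad hires 10x as costly as missed good hires, a bar well above
# "signal says probably qualified" (threshold 0) minimizes cost.
costs = {t: expected_cost(t) for t in (0.0, 0.5, 1.0, 1.5, 2.0)}
```

A side effect of the high bar is that good candidates get rejected often, so each eventually-hired candidate sits through many interviews first, consistent with the “you’re going to do a lot of interviews” advice.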

On the topic of stuff like that “hoops” story – I’ve been reading some blog posts (e.g. 1, 2) about how tech industry hiring practices are broken.  I have various thoughts about this stuff.  Here is one:

I’ve always found “application processes” – applying to college, REUs, grad school, jobs – difficult in a unique way.  Not the most difficult thing in life, but difficult in a particular way nothing else is.  Mostly, the difficulty is concentrated in the parts of the process that are more intuition-based, less by-the-book, like college essays, research statements, interviews, cover letters, etc.

The special reason these things are difficult is that they are special forms of writing or interaction which don’t exist outside of application processes.  They’re supposed to be proxies for things that do exist outside of application processes, and complaints about e.g. interview practices often focus on how standard interviews aren’t very good proxies, and things like work samples are better.  One reason this might be true is that work samples are less proxy-like, much closer to a direct observation of the desired trait.  This seems plausible, but I don’t think it’s the whole of the problem.  What makes this category of proxies especially bad is that they are testing a skill which is never exercised “in the wild,” and which people only practice to get better at applications.

The college essay, for instance, is a literary form unto itself, but one that is never written except when people are trying to get into college.  This means that the form is practiced almost entirely by neophytes.  With other literary forms, you can show people your work and get feedback, and progressively improve that way.  With college essays, you will probably never get any feedback from the people whose response really matters, i.e. admissions officials.  You can solicit feedback from friends and family, who are neophytes at the form as much as you are, or from paid tutors, who may or may not have any direct knowledge of what admissions officials want (as opposed to educated guesses).

It is as if poetry only existed as a part of college applications, and so everyone writing a college application had to reconstruct the aesthetics of poetry by themselves, with input from people who have at most written one or two poems in their lives.  You’d write something in rhyming iambic pentameter, say, and then look at it and think “no, that sounds too goofy,” and then, “wait, but maybe that’s just how poetry is supposed to sound?”  And it would be hard to answer that question, because you’d be in a culture where no one cared about the answer except people applying to college, and only insofar as it impacted their college prospects.

Interviewing isn’t as extreme as this, but it has the same quality.  An interview, like a college essay, takes something that might be a good proxy (“writing essays” / “having a conversation”) and formalizes it.  But the formal version is expected to bear a lot more load than these things usually do.  With an ordinary brief essay, you’re just trying to achieve one or maybe a few goals of your own choosing; in a college essay, you have the same small amount of space to show that you’re a good writer and unique/different and generally smart and a good fit for the college etc.  In an ordinary (say) technical conversation with a colleague, you can afford to concentrate on the technical stuff and not micromanage your affect and body language; in a technical interview, you want to get the right technical answers and convey that you’re a good conversationalist and generally smart in a vaguely defined way (“we just want to see you think,” they say) and that you have good “culture fit” etc.

These tasks are asked to bear more load than the real-world tasks they were adapted from, and as a result, they become their own sui generis things, different from the original tasks.  Whatever makes someone good at writing college essays, it’s not just being good at essay writing.  (When I was applying to college, my mother bought a book called something like “50 Successful Harvard Admissions Essays,” and I read some of them and was struck by how bad they all were.  But I was considering them as essays, which wasn’t the point.  They were trying to do the impossible task set for them, and ended up overstuffed with signifiers of smartness/authenticity/quirkiness/hardship-overcoming/etc., which made them seem smarmy and insincere and left the reader with no room to breathe.  But if you don’t overstuff the essay with these signifiers, how will the admissions team know that you’re smart/authentic/quirky/an-overcomer-of-hardship/etc.?)

These are not natural forms of human activity.

I was watching a video at 2x speed and thinking “why does this make everything look like a toy model?”, and then I thought “speeding up time makes all the velocities faster, and it makes them change faster, so things look like they have smaller masses because they have larger accelerations under the same forces (the forces inferred by the viewer from the appearance of the scene)”

This is cool and something I’d literally never thought about before
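The scaling argument can be written out explicitly (my notation, not the original post’s). Playing footage back at k times normal speed is a time rescaling that leaves positions unchanged:

```latex
% Playback at k times normal speed rescales time; positions are unchanged:
x'(t) = x(kt)
\;\Rightarrow\;
v'(t) = k\,v(kt),
\qquad
a'(t) = k^{2}\,a(kt).
% A viewer who holds the inferred forces F fixed reads off an apparent mass
m' = \frac{F}{a'} = \frac{m}{k^{2}}.
```

At 2x playback the apparent mass drops by a factor of four. Film miniatures exploit the inverse: models are shot at high frame rates and played back slower than real time, which inflates their apparent mass and makes them look huge.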

horrorjapan:

beachdeath:

“the CIA is releasing tens of thousands of files and videos from bin laden’s compound today, except his DVDs of ‘home on the range’ and ‘ice age: dawn of the dinosaurs’ and his copy of final fantasy vii, because those are copyrighted” is not a sentence i ever thought i would type, but 2017 continues to be full of surprises

Final Fantasy, not even once.

(via blackblocberniebros-deactivated)

robotsareonlysometimesright replied to your post “PayPal once rejected a candidate who aced all the engineering tests…”

I found the lecture this is from, and, huh. In context it isn’t framed as a criticism at all, it sounds like Levchin and Thiel are genuinely arguing this was a sound decision because the candidate would have said he was going to play hoops and his coworkers would have been put off and where would team cohesion be then? Astonishing.

Yeah.  TBH I haven’t looked at the source in any detail – it was linked in a blog post I was reading, and I skimmed it trying to find the reason the blogger linked to it, and then I hit that paragraph and was like wait, what?!, so I went back to tumblr, #quoted it, and then decided to close the browser in search of productivity

PayPal once rejected a candidate who aced all the engineering tests because for fun, the guy said that he liked to play hoops. That single sentence lost him the job. No PayPal people would ever have used the word “hoops.” Probably no one even knew how to play “hoops.” Basketball would be bad enough. But “hoops?” That guy clearly wouldn’t have fit in. He’d have had to explain to the team why he was going to go play hoops on a Thursday night. And no one would have understood him.