afloweroutofstone:

For those who don’t know, this is because one openly right-wing media company, Sinclair Broadcast Group, has bought up tons of local news stations, is now the largest television station operator in the country, and is putting that reach to use

(Source: twitter.com, via afloweroutofstone)

I didn’t actually know until recently that the character “Simplicio” in Galileo’s Dialogue Concerning the Two Chief World Systems was based on the Pope, who up until that point had been Galileo’s friend and ally, and who had asked Galileo to include his own views in his book

Like, I’d be pissed off too if I asked the guy to discuss my opinions in his book, and then read the book and found them in the mouth of a character named, roughly, “Dumbass”

There Is No Case For The Humanities →

nostalgebraist:

deusvulture:

I encountered this post on tumblr, being quoted in a dumb inflammatory way to get notes, so I’m reposting it here to preserve for my own benefit. Essentially makes the (100% correct) case that the true “justification” for the humanities is nonexistent – they’re an end in themselves in many people’s value systems (including mine), and will therefore be fought for… and that’s all there is.

Yeah, there’s nothing really new in this article, but it does make the case I always want to make against all the other writing I see on this subject.

Okay, to be fair, I’d never heard of Ramism before reading this:

As teachers, what humanists want most of all is to initiate their students into that class. Despite occasional conservative paranoia, there is not some sinister academic plot to brainwash students with liberal dogma. Instead, humanists are doing what they have always done, trying to bring students into a class loosely defined around a broad constellation of judgments and tastes. This constellation might include political judgments, but is never reducible to politics. It is also very susceptible to change. For two hundred years or more, European universities were deeply enmeshed in the pernicious stupidity of Ramism, with Ramist professors installed across Europe in any number of the humanistic disciplines. Eventually the fad dissipated, and today, the celebrated method of Petrus Ramus holds little more than antiquarian interest. We should not assume that the current modes and fashions of the academic class are permanent. But if they are to change, that change will come from the inside.

(via nostalgebraist)

So, this is going to sound way darker and edgier than I intend, but – I honestly don’t know what people mean by “forgiveness”

When someone wrongs me, it hurts more at first, and then less later on, sure.  I think of that as closely analogous to a physical wound healing, with about as much moral significance (none).  And yes, I do try to give people second chances, but that doesn’t depend on overcoming my disapproval of first offenses.  (I still disapprove of them, just not enough to preclude a second chance.)

I understand the idea that you have to stop letting anger consume you after a point, and that anger can lead you to wish truly horrible things upon a person for a temporary period before you mellow out and remember that they’re human like everyone else.  But both of these strike me as entirely internal processes, about taking proper care of your own mind.  Yet people talk about “forgiveness” as something you do to someone, or an opinion you express – like it’s some sort of open acknowledgement that the other party’s wrongdoing was different than you’d previously thought/felt, if not necessarily not wrong.

Again, I just think about these things like physical wounds.  If I burn my hand on a stove, it may hurt for a time, and in that time I may be especially wary of touching stoves.  Then, as the wound heals and I forget about it, I may (or may not) lose some of that wariness.  But there’s never any discrete point where I’m like, “UPDATE: hand-burning potential of stoves less of a big fucking issue than I was previously making it.  Stoves and I are cool now.”  I still know heated stoves can burn my hand, and that never becomes less true!  I may forget this property of stoves, occasionally, but I never forgive stoves for it, whatever that would mean!

4point2kelvin:

nostalgebraist:

I can’t imagine I’m the first person to have this idea, but: I’m starting to think that, at least with currently existing technology, it’s always a bad idea to think of your software as an “agent” instead of a “tool.”  And, on the flipside, that many (if not all) useless “agents” could be repurposed into useful “tools.”

The distinction I’m making is between two types of software that try to save you time:

A “tool” saves you time by giving you a (literal or figurative) “button” you can press (could be a command line string, whatever) which will trigger a constrained, broadly transparent, broadly predictable (if perhaps quite complicated!) string of automated actions, which you then won’t have to do yourself.

An “agent” instead tries to do some task for you entirely on its own, up to and including deciding when the “button” should be pushed, and then pushing it.  A tool is never used by default, only when you push its button (perhaps by implication, through pushing the button of a higher-level tool).  Agents push their own buttons, whether you want them to or not, and usually the only way to push back against an agent that is behaving annoyingly or uselessly is to turn it off entirely, depriving yourself of all its functionality even when you do want it.

Agents and tools are often quite similar in their actual capabilities, and in what they do after the button is pushed.  But agents are more opaque to the user, often on purpose (to make them seem “smart” and/or effortless to operate).  And the model of time-saving is subtly but importantly different in the two cases.

A tool wants to multiply your capabilities.  It wants to let you do more in any given ten minutes by giving you a button that will instantly do something that used to take you (say) five minutes.  Now, every time you want to do that thing, it’s as if you’re getting five extra minutes for free.

An agent wants to replace you.  It wants to let you do more in any given ten minutes by making a conceptual breakdown of your work, completely automating a subset of it, and posing as a new coworker who handles that entire subset so you don’t have to.

Why are agents worse than tools?  First, because we’re really good at making computers do complicated-yet-constrained tasks, but we’re really bad at making them anticipate our needs.  “Deciding when to push the button” is usually very hard to automate – and strangely pointless to automate, too, when it takes all of half a second to push one.

And second, because the space of conceivable, useful tools is much larger.  As long as you leave room for some human volition, you can have all sorts of great ideas that can multiply human productivity by orders of magnitude.  If you insist at the outset that the human is going to be cut off from the system, then you won’t even think about any of the ideas that necessarily involve a human participant, even if they’re great ones.

Think about compilers and interpreters.  Once upon a time, you (more or less) had to write byte code if you wanted to program a computer.  These days, the journey from “concept in your head” to “code you can run” takes orders of magnitude less time, because we now have tools that will automatically translate much higher-level descriptions into byte code.  After their buttons are pressed, these tools do all sorts of very complicated, very fancy things entirely on their own, in a way that is quite “smart” if you want to frame it that way – it seems to me that GCC and LLVM are as worthy of the title “AI” as anything on the market these days.  But these things don’t pose as coworkers or assistants, they only run when you tell them to, and they limit their behavior to an easily comprehensible scope.

Imagine if people in the days of byte code had thought about “automated programming” in the agent model instead of the tool model, with the goal of entirely replacing parts of the programmer’s workflow.  Would they have invented programming languages at all?  “Why translate into byte code from a language humans find easier, when the goal is to write code without the human needing to lift a finger?”

Compilers and interpreters are complex tools, and they are wonderful.  Are there any agents that are similarly wonderful?  When a software feature is marketed as being “smart” (which seems to be a term of art for “agent”), doesn’t that usually mean “useless”?

(The phone app that came with my sleep tracker has a feature called “Smart Coach.”  Each day, in semi-random fashion, it gives me a new piece of advice based somehow on my recent data.  The software capabilities behind the advice look like they might well be very useful to me, but they have been rendered useless by wrapping them in an agent.

“Smart Coach noticed you tend to sleep more on weekends.”  Okay – but how much, and over what time period, and is there any reason you told me that just now?  An ability to see my data averaged by day-of-week (which the app is already computing, apparently) would be so much more useful than Smart Coach.  “Smart Coach noticed you got less deep sleep last night than usual for your age cohort.”  Okay, so how much deep sleep does my age cohort get, and just how much less am I getting?  The developers put lots of interesting information at my fingertips, and then systematically hid it from me, because they wanted an agent.)
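To make the “repurpose the agent as a tool” idea concrete, here is a minimal Python sketch of what the tool version of that first Smart Coach observation might look like: a button you press (a function you call) that shows sleep averaged by day of week, instead of an agent volunteering a vague summary. The record format and function name are invented for illustration, not taken from any real sleep-tracker API.

```python
# A "tool" version of "you tend to sleep more on weekends": show the user
# the actual per-weekday averages and let them draw the conclusion.
# Records are hypothetical (date, hours_slept) pairs.

from collections import defaultdict
from datetime import date

def sleep_by_weekday(records):
    """records: list of (date, hours_slept). Returns {weekday name: mean hours}."""
    buckets = defaultdict(list)
    for day, hours in records:
        buckets[day.strftime("%A")].append(hours)
    return {name: sum(hs) / len(hs) for name, hs in buckets.items()}

records = [
    (date(2018, 5, 4), 6.5),   # Friday
    (date(2018, 5, 5), 8.5),   # Saturday
    (date(2018, 5, 6), 9.0),   # Sunday
    (date(2018, 5, 11), 7.0),  # Friday
]
print(sleep_by_weekday(records))
# e.g. {'Friday': 6.75, 'Saturday': 8.5, 'Sunday': 9.0}
```

The app is presumably already computing something like this internally; the difference is only that a tool would surface the numbers on demand rather than paraphrasing them at semi-random moments.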

High-frequency stock traders. Airplane autopilots. Thermostats. Industrial control systems of all sorts. Game AI. Roomba. Ad-serving AI.

Is an ATM a “tool” I am using at some specific time to make a transaction, or is it an “agent” that the bank started running in the past to act on its own? The distinction is somewhat fuzzy.

The key to identifying useful agents, I think, is divorcing this category from “things built to pretend to be people.” The things we have built to pretend to be people must pretend to be agents, because people are (on average) agents, but the things that actually generate value by replacing human decision-making tend not to have to pretend to be people.

Yeah, I admit the distinction is pretty fuzzy, especially when we are looking at examples of well-established, successful automation.  Upon reflection, framing this as a dichotomy might be more confusing than helpful – although I still think there is an important concept there, and so I’ll look for a less dichotomous way to phrase it.

The principle could perhaps be phrased as, “don’t be afraid to prompt the user for input at points where you don’t yet have a good replacement for their decisions; fight the temptation to automate even at these points, just so you can say the thing is fully autonomous and can run without a user at all.”

Thermostats might be a useful example here.  Before thermostats, when you had to exert a lot of manual control to keep your house at the right temperature, it would of course have been pleasant to fantasize about a machine that would “do all that work for you.”  At this point it would not have been obvious that a thermostat was what was wanted – that is, a device that still relies on you to tell it the desired temperature, and then automatically adjusts the furnace.  Instead, one might well have just fantasized about a machine, any machine, which would “handle the furnace” for you.

But “handling the furnace” splits up into a part that is straightforward to automate – the thermostat – and a part that is not – deciding on the target temperature.  This is perfectly fine, because you can eliminate all of the tedium (manually stoking a furnace and adjusting its valves and dampers) by just automating the first part.  But this does require giving up on the fantasy of a machine that would simply “handle the furnace” for you and let you forget it even exists.

You have to give up on the fantasy of something that literally runs itself to achieve the reality of something that more or less runs itself, and I think that’s true in a lot of cases.
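The thermostat split described above can be sketched in a few lines: the automatable part is a simple rule deciding when the furnace should run, while the setpoint stays a human decision. This is a minimal illustrative sketch, not any real thermostat’s firmware; the class and method names are invented, and the hysteresis band is just the standard trick for avoiding rapid on/off cycling.

```python
# Minimal sketch: automate "when should the furnace run?" but leave
# "what temperature do I want?" to the user.

class Thermostat:
    def __init__(self, setpoint, hysteresis=0.5):
        self.setpoint = setpoint      # the one decision left to the human
        self.hysteresis = hysteresis  # dead band, to avoid rapid cycling

    def furnace_should_run(self, current_temp, furnace_on):
        # Bang-bang control with hysteresis: turn on below the band,
        # turn off above it, and otherwise keep the current state.
        if current_temp < self.setpoint - self.hysteresis:
            return True
        if current_temp > self.setpoint + self.hysteresis:
            return False
        return furnace_on

t = Thermostat(setpoint=20.0)
print(t.furnace_should_run(18.0, furnace_on=False))  # True: too cold
print(t.furnace_should_run(22.0, furnace_on=True))   # False: warm enough
```

Everything tedious (stoking, valves, dampers) is inside the automated loop; the one input the machine has no good replacement for is prompted from the user, which is exactly the principle above.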

(via 4point2kelvin)

napoleonchingon:

napoleonchingon:

Lying to yourself is easy. All you have to do is not think about the things that would expose you as a liar. People do this all the time. Or you can think about them once, and then quickly hide them behind a rationalization that lets you follow a well-trod path whenever the need to think about these inconvenient things comes up.

Lying to yourself while thinking about the things that would expose you as a liar is actually not that hard, either. I used to do this often when doing anything that taxes endurance. I would convince myself to keep going by offering a fake reward (e.g., a chance for an upcoming break). You have to remember that you’re lying, or else you might take the break, and the strategy won’t be effective. But you also need to believe yourself, or else you won’t be convinced. It’s totally possible to trick yourself in this way, though.

Lying to yourself while thinking about the things that would expose you as a liar all the time, continuously, while external stimuli reinforce the message that you are a liar is extremely hard. This is generally a good thing, but sometimes, it’d be nice to be able to trick yourself. My life is best when I think I am the master of my fate (actually I am not), democracy works best when the outcomes of elections are treated as fair and representative (actually they are not), etc. etc.

To bring this back to Jordan Peterson, what I want to say is…

So let’s try to bring this back to Jordan Peterson, whom, as a warning not to take this too seriously, I’m mostly aware of through cultural osmosis. We have this meme in our culture that taking the blue pill is the coward’s choice. That the upright individual would ignore the allure of the blue pill as the easy way out and take the red pill every time. But this idea is often a very poor description of reality.

Taking a newly discovered truth about what is going on in the world and letting it radically change your worldview isn’t necessarily that difficult or that noble. And, because we don’t actually have a means to totally, deterministically forget a viewpoint that is presented to us, taking the blue pill isn’t really equivalent to how it’s presented in the Matrix series.

Consider any set of facts about modern life that can lead to despair or to post-left helplessness. And then, while keeping them in mind, live the best life that you can as if the world was fair and you mattered. It can’t be presented in this way, though. It’s no use to say “the personal is political, but in most cases please act as if that weren’t true, since it will lead you to be happier”. No one wants the message to be presented as a blue pill. Peterson’s allure and success seem to me to stem from his ability to dress up the blue pill message as a new revelation – a different red pill.

And I think that can explain the totally different reactions to Peterson that we’re seeing. If you concentrate on the parts of his message that are supposed to be a revelation, you end up surprised and kind of offended that anyone takes him seriously. But if you take what he thinks these revelations should lead you to do, it ends up being inspiring.

#that and he’s a huge dick which excites a lot of people

In general I have a strong disinclination to add even words about Jordan Peterson to the internet, but I did want to reblog this take because it’s the most illuminating (or at least illuminating-seeming?) one I’ve seen

(via sungodsevenoclock)

wizardjpeg:

an-australian-lungfish:

artist-varo:

Gravity, Remedios Varo

@wizardjpeg you know this guy

that’s my good buddy: long thomas

(via wizardjpeg)

Obayashi recalled that his producer told him that Toho was tired of losing money on comprehensible films and was ready to let Obayashi direct the House script, which they felt was incomprehensible.