
Sorry that Frank was down for a while this evening…

I’ve been having a lot of issues lately where my HTTP requests to tumblr just hang indefinitely, without the Python requests/httplib libraries raising any kind of timeout-related exception.

When I ctrl-C the script and restart it, it works again, but I don’t know how to automate that manual step, since I’m not given an exception to handle. I hadn’t seen this before, like, this week? Or not often, anyway. I don’t know enough about web programming to know what the cause is likely to be, either.

Haven’t read “Homestuck 2” (lol) yet… given that the Epilogues were good and appeared to end everything on a high note, I’m worried that there’s nowhere to go but downhill (and, what’s more, going sharply downhill after a legitimately great quasi-ending would be very on-brand). But we’ll see.

Come to think of it, it was helpful when people told me I should read the Epilogues after I’d originally decided to ignore them, and similar advice (in either direction) would be likewise helpful here

youzicha:

nostalgebraist:

transmemesatan:

reading up on roko’s basilisk and it is even dumber than i could possibly have imagined.

it literally falls apart the instant you ask “wait, if it’s already been built, what does actual punishment accomplish that the hypothetical threat of punishment does not?” surely an ultimate superintelligence would have better things to do with its processing power than ensure its construction after it has been constructed.

I mean the idea is that if you followed through on this logic, you wouldn’t do things for fear of the threat, so it wants you to believe that it actually will punish you even if that seems “pointless”

It’s the usual logic of deterrence – if the government says “hey criminals, we’re totally gonna punish you” but also “once we’ve caught a criminal it’s pointless to punish them, they already did the crime, no changing it now,” then people will think “OK, I can commit crimes and not be punished, cool”

There are other giant flaws in the Basilisk though.  For one thing it assumes that you can predict the reasoning of a superintelligent being – you aren’t really “talking to” the actual AI that might be built, you’re talking to an idea of it in your head, which has certain priorities etc.  But the kind of AI these people are talking about is supposed to be so much smarter than a human that this is like a mouse thinking about its idea of a human – “yeah, so they’re going to be so smart, they’ll probably be really good at grooming their fur, like maybe they have advanced paw motions that I couldn’t think of?  And they’ll leave urinary odor cues for social communication, right, but they’ll be really clever about where and when they urinate?”

Plus it assumes that the eventual AI will have access to your actual brain.  This is not an uncommon assumption among people who think that superintelligence is imminent and that they will be “uploaded” within their lifetimes, but the people you might be donating money to for Basilisk reasons haven’t had much progress to show for ~10 years of work, and AI hype has been saying “just wait 20 years” for 50-60 years, so

I’m not super convinced by the “you can’t predict a superintelligence” objection.

For one thing, it’s a common observation that in strategic interactions, you often want to be predictable, so that other people can coordinate with you. Even if you are not super-intelligent, it is easy enough to behave in completely unpredictable ways (e.g. by determining your actions with a cryptographic random number generator), but doing so has no deterrent effect. Rather, rational actors tend to try to behave according to simple rules (“if you launch your nuclear missiles on us, we will launch ours on you”). The usual thought in LessWrong circles is that the same thing would be even more true for super-intelligent computer programs: although they could be arbitrarily complicated, they might want to self-modify in a way that makes them easy to analyse by other actors (“program equilibrium”). In particular, if the future computer wanted to influence us, it would want to pick a policy simple enough that we could understand it.

But second, Roko was not talking about some arbitrary super-intelligent AI dropping down from the sky. The shape of his argument was: “here is a proposed candidate for the AI’s objective function (CEV). It may seem superficially appealing, but actually, if you ran a computer with it, it would end up torturing us. So this is a bad function, and we should look for something else (in particular, some objective that doesn’t just look for goodness in the world, but also dislikes blackmail)”. I think you can criticise this for not correctly analysing what a “goodness maximizing” computer would do, but you can’t say that we don’t know which priorities a super-AI would have. The priorities (CEV) are given as part of the thought-experiment.

I think I get the point about wanting to pick policies that someone else would understand, but I think we still have the problem that we’re not observing the superintelligence, we’re imagining it.  That is, we haven’t seen it present a specific policy that we can understand, we’ve said “here’s one policy we can understand, it seems like it might present it.”  But we don’t know whether it would, and since it now seems empirically like a bad idea (I’m not aware of anyone positively motivated by the basilisk, plus it has created bad PR for the people advocating for this approach to AI) … 

I’ve never understood what CEV is, except for “good stuff, but like, formalized?”  Is CEV specified precisely enough that problems like this can be identified?  Or is this just “we’re utilitarians, so we want the machine to be a utilitarian, but wait, utilitarianism can have disturbing implications”?

(“Wait, the unfortunate person in the dust speck thought experiment is me?  Better add a loophole to fix that”)

(via youzicha)

brazenautomaton:

nostalgebraist:

91625:

nostalgebraist:

Apparently there was an MSPA newspost, and Andrew Hussie is saying he’s hoping to finish Homestuck on 4/13/2016, which he emphasizes is “the 7 year anniversary of Homestuck”

That particular number gives me a certain kind of vertigo – like, I got into Homestuck in summer 2011, and back then, when it had existed for a little over two years, basically all of the core content was already there.  Act 6 contains many pages, and has taken a very long time to write, but a lot of it is just filling out templates that were established in 2009-2011.  All the stuff people have argued about, all the typologies and the wild concepts, were pretty much in place five years ago.

It feels like “Homestuck” was a thing that happened a long time ago, and is now over and done with.

I wonder if Hussie likes what he’s doing with himself these days, or if he just doesn’t have the option to stop.

I have literally just started reading Homestuck (largely thanks to you, fwiw). What am I letting myself into?

Basically, it’s one of those works of serial fiction with a whole lot of great moments, plot threads and characters, that keeps stringing you along with the promise of more and more, and often actually delivers, but keeps promising even more, and then just delivers less … and less … and less … and as you get closer to what should be the thrilling conclusion it just feels like the author barely cares anymore, or is actively frustrated with having to finish the series.

The parts where it was really in full swing are Acts 1-5.  Act 6 is gigantic and mostly a letdown.  I’d recommend reading Acts 1-5 and at least dipping into Act 6 but putting it down if you start to feel like Act 6 is going nowhere, because it is.

(I’m honestly not sure how much any of this is controversial anymore?  I was an early Act 6 hater and got into various internet arguments about it.  But I think it let down everyone eventually.)

I call it “Zeno’s Paradox pacing” and stand by it

every year (let’s say), the story progresses halfway toward its end

so after a year, the story was half over. lol

after 2 years, it was 75% over.

3 years, 87.5% over. 

after 4-7 years, come sweet Death.

it always seems like it’s right about to wrap up, but it never will. remember Hussie saying that act 5 was the overgrown mutant of Homestuck, that act 6 would be far shorter and act 7 barely an epilogue? we will be having this conversation in 2077.
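(The arithmetic in the joke checks out, for what it’s worth: if each year covers half of what remains, the fraction finished after n years is 1 − 2⁻ⁿ, which gets arbitrarily close to done without ever arriving. A two-line check:)

```python
def fraction_complete(years):
    """Zeno's Paradox pacing: each year, the story covers half of what's left,
    so after n years it is 1 - 2**-n complete -- asymptotically done, never done."""
    return 1 - 0.5 ** years

# After years 1, 2, 3 the story is 50%, 75%, 87.5% over, matching the post above.
```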

(via brazenautomaton)