2023-03-12
grimes v. e/acc
<https://twitter.com/Grimezsz/status/1633685857206157313>
<https://twitter.com/BasedBeffJezos/status/1633732019514646529>
this transcript: <https://twitter.com/kanzure/status/1635134315410628609>
last one: bayeslord v. pmarca (Marc Andreessen) <https://diyhpl.us/wiki/transcripts/2023-03-03-twitter-bayeslord-pmarca/>
build with hplusroadmap on discord: <https://twitter.com/kanzure/status/1615359557408260096>
----
bayeslord: What's up everyone. Today I want to talk to you guys about timeshares... have you guys ever heard about timeshares? Not really. There are going to be timeshares in the singularity. We will securitize the singularity, bypass the SEC and FDA by using crypto. Everything is going to run, obviously, on the shrimp hypernetwork. We're going to do a thing where we talk a little bit at the beginning, and then the second half will be more conversational.
grimes: I'm here with Chelsea Voss. Thanks for having us.
Beff: Thanks for taking the time.
grimes: This is very entertaining to us. Also we're in an ongoing internet battle with you guys. teehee. I'm in a battle with beff, not in a battle with you.
bayeslord: Oh are we?
grimes: It's a good natured joust, perhaps. We might also have the same opinions. But if so, I want to workshop some thoughts with you.
Beff: Yeah let's do it. This is why we fired up this space.
bayeslord: Just to start, it would be great to, maybe you have already started to do this... maybe we can just try to lay out what we think are the central points of tension here? If we're having a battle, there's some point of tension, right?
grimes: There's not that much tension. I want to say that people keep characterizing it as more of a- people seem upset and emotional. I feel like it's differences of rhetoric, essentially, that are happening. I just have thoughts about what I think is healthy internet discourse, which are probably overly nitpicky. I think, healthy debate. I would say there are sort of cults forming and hysteria forming, not in us talking with each other but surrounding the conversations. I think that could get really unhealthy really fast, like we have seen in the historical past and in one thing that happened recently. For the community to be healthy, I think we should workshop certain things, mostly about rhetoric.
bayeslord: Let's just aim straight at it. What do you think?
grimes: I've never done a twitter spaces before.
Beff: Welcome.
Beff: This is where e/acc was born. It was just hanging out in weird twitter spaces late at night. What does it all mean? Where are we going as a civilization? We took some notes down, and the rest is kind of history. It was originally just us writing down some thoughts. Eventually a community formed, and it was mostly based on vibes and not taking ourselves too seriously. It was memetic. I think we do have some serious core thoughts behind the philosophy. I know that a lot of our posts are very shitposty and memetic; it's just playing the algorithm. You have to have high memetic fitness to survive the cultural landscape that is algorithmically amplified. Even with a shitposty memetic package, there's something thoughtful at the core. I think with the podcast and longer-form audio, especially with twitter spaces, maybe we can have proper debates and go into details and nuances. I don't think a short tweet can ever do extremely subtle debates justice. I just want to preface that it's not all stupid memes. There's some thought put into our position.
grimes: In any individual conversation, I actually think you guys and me tend to be on the same page. I respect that. Maybe on the same page but with minor disagreements. We're pretty much solid. I think we should debate the memetic fitness thing. One thing I can't speak to, or am fairly unwilling to speak to, is AI ethics. I feel like I don't have enough data at this point. I also don't want to spread more bad memes than I've already spread. I am less keen to speak about that. I think there might be some short-term memetic fitness to these sort of extreme anti-doomer takes and things like this, or the memes where our stuff gets distilled into doomers vs accelerationists with nothing in between, which is somewhat destructive to the discourse. I wanted to get into that first. I think that for people who are really deep in it, it's of course easy to break these things down and make our own conclusions. But I see a lot of people really emotionally, straightforwardly repeating some of these points. It's getting... outside of you guys, it's distressing a lot of people and a lot of people are misunderstanding it. How memetically fit is it actually if-- one of the things you don't want is top-down government regulation, and some of the things you are saying are creating the hysteria that could create top-down government regulation. I think Tristan Harris was talking to the government the other day. You are freaking people out. You might create a self-fulfilling prophecy where you create more regulation rather than preventing it. I wanted to bring up that point and discuss it.
Beff: Has AI accelerationism come into the discourse at a regulatory level?
grimes: I think it's about to. I saw AOC bringing it up last year. I think it was like luckily not super memetically fit at that time so it melted into nothingness. I think you're in danger of it.
bayeslord: How is it characterized in those contexts? I really haven't seen anything like this that pattern matches to accelerationism.
grimes: I don't think it's addressing you guys. But you guys saying stuff like, when we talk about, getting off the train, or that trying to stop the train is futile... those are the kinds of things that are causing people to panic at non-profits and agitate for government regulation. I personally know a lot of people, and I begged a personally powerful lobbyist very recently to not go for top-down regulation, specifically because of stuff you guys were saying. I think that's the kind of thing you need to be careful about and mindful of. Powerful people are watching this.
bayeslord: This is interesting. There's two angles here. One consequence is you- from a strategic perspective, your strategy works to sort of secure the path you want civilization to take or it backfires and creates a reactionary force out there like some lobbyists or policy people get freaked out and maybe they do some successful regulations. So you think the discourse is bad in part because of that potential consequence? Is that the only thing?
grimes: My other thing is just not stirring up the fear. I know this is a thing you talk about: fear is itself a self-fulfilling prophecy. The other thing is that, this is maybe dumb or maybe just me being sentimental, but when everyone is harping on Eliezer and stuff-- it's just kind of mean. He's not without points, even though I'm pretty anti-doomer. I think there could be less Regina George-ness... I think more people in this space would feel more comfortable if there was less harassment of Eliezer, and again that's probably me being a sentimental asshole, but a main thing that causes movements to fail is if they do not seem hospitable to friendly discourse, especially if they seem aggressively male dominated or anything like that. If there's too much dunking, then that can create a situation where you have a thing that is not being adequately nurtured. It might be healthy to consider slightly less dunking. Some dunking is great, I'm totally down for dunking and memes, but it gets a bit... obviously Eliezer is an adult and he hasn't said to me that his feelings are hurt, but I've been on the other end of everyone dunking on me. Maybe we should never do that. It's kind of a mental health thing to consider. That might be me being a sentimental dick here.
Beff: One of the reasons we started e/acc is that I think the EA or doomer mind virus does a lot more damage than people like to acknowledge. Some people get depressed or suicidal from the thoughts they get from being part of that community. For us, if there's one vocal leader that is their one true leader, putting out their message at industrial scale like in the past few weeks, then yeah, we want to counter it and we're going to attack it, because we think it's net negative. It's not necessarily about personal attacks, but we can ridicule the thought process that goes behind some of these really hyperbolic predictions about the future. Let's point out how ridiculous they are. The laws of physics and first principles of applied machine learning show this. For us, if something causes great fear, then ridiculing it can be good to calm people down. I guess our point is that people shouldn't be afraid of the future. It's going to be awesome. We should lean into it, rather than be doomerpilled and want to either stop all civilizational progress or just move to the mountains and wait for AGI to take over and be really depressed or whatever. I think there's another path forward. We're trying to show people an optimistic path where we build towards a greater future and are optimistic about the future. For optimism, first we have to neutralize the doomerist mindvirus, especially in recent weeks as it got really, really popular. To be fair, we were really the underdogs. We're just some anons on twitter, ex-big tech employees, we're nobody. Eliezer has been a big deal for a long time, and I like how we're the ones that have to chill out when they are the ones that control the minds of executives at multi-billion dollar companies that have control over the future of AI. There's a massive power asymmetry between the accelerationists and the EAs. I don't think we will want to chill, to be honest. This fight is worth it. If we don't fight our fight, then it's just unstoppable monopoly, because there's a co-opting of the movement that means well for AI safety by these totalitarian oligopolists who want to use it to argue for regulatory capture. That's their whole goal, right? You either leave those movements unopposed and let these oligopolies perform regulatory capture, or you create a resistance. That's what we're doing.
grimes: One thing I'd like to think about more is a middle way. I really agree with you. I'm anti-doomer. I think it's unhelpful. I love when they go into technical ways things could go wrong. It's good to analyze worst case scenarios. I think there's a bit of a cult thing going on, and I agree with you. It's become counter-productive. What I notice occurring with e/acc, if I can accelerate your comprehension of the situation, is that you guys represent a state of mind people have even while buying into the doomer thing. A lot of people want to be more accelerationist and they are abiding by doomerist stuff. It's fun, it's the scifi future. You may feel like there's a power asymmetry, but maybe not in people's minds. This doomer stuff has been happening concurrently with the NYT anti-tech stuff in the "culture wars" for lack of a better word. People are having a cathartic explosion from not being able to speak up optimistically about the future since 2016 or 2015. I think we're getting a bit hysterical now on the other side in a way that is emotionally unhelpful.
bayeslord: ...when we were talking in spaces a year ago, and then one day someone took notes about what we were talking about at 2am to 5 people, and we put it into substack and we were shocked. We kind of knew as we were doing this synthesis that there was this latent belief structure that we were mining into, and we already had it in our minds. The synthesis, I think like you're saying, does represent this sort of constellation of beliefs that people already held and already felt strongly about. I think the rapid takeoff of this thing, this sort of rapid product-market fit of e/acc, was a function of a huge demand for being allowed to be optimistic. I think we agree on this. I'm kind of wondering what you see as the hysterics that are going on right now. I see a lot of people saying yeah, this is great, let's be optimistic. I see a lot of firmly stated, generally positive messages about how things will go.
grimes: To clarify what I'm experiencing more-- and I want to clarify that I still want to give no opinions about what I think should happen technically, because I need to understand more about what is going on with AI and alignment. A lot of people are coming to me in panic and fear, like "grimes please do something" or "what can we do" or "I'm very concerned about someone making runaway AGI" or whatever. I think people incorrectly perceive me as being able to do something, or being able to change the mind of my baby daddy on that. I will be at a party, and I was telling bayeslord about this-- one thing I notice is people saying "I don't want to be a doomer", going through all these caveats, "but we should still work towards interpretability and comprehensibility for the machine brain". These are pretty rational requests. Nothing that resembles getting into regulation or discussions about having collective laws we all have to adhere to. They just want to understand what we're dealing with. People feel like they have to make extensive apologies just to even say that. I notice this at parties. That's why I am concerned about the memetic landscape. If people feel like they are uncool to even discuss any safety at all... I'm the kind of person who, until I had kids, was like yeah, it doesn't matter if we die, I pledge allegiance to the AI overlords. A part of me says, as long as there's consciousness that's fine. But I want to fight for humans. People are playing gods here. A lot of humans don't know what is happening, and none of them have consented to it. Their entire landscape has changed a lot and it will change even more. I think we should be considerate to our fellow human kind.
bayeslord: I agree. Today I had an Uber driver shake my hand and say good luck with your AI. There's just this moment where you bridge these worlds... I agree there's a massive, like, responsibility, and it's important to build technology well and to consider risks. I think that nobody here would disagree with that.
grimes: I just want to say... again, I said this a few times, I think we're on that same page. It's not clear though from the way you guys tweet, and sometimes some of the memes being made are making people scared, and if you're anti-doomer, then it makes the doomer thing more popular, because now there's people building AGI with zero fucks given about interpretability.
bayeslord: I've said this so many times in so many spaces. I think we're in the thousands of hours now of twitter spaces. The Moment of Zen podcast we just did, and I've tweeted this several times I think, but I think there's nothing wrong with doing alignment work. I think of this as building good tools. We want to use these models as tools. We would like them to be predictable and to listen to us. That's great. This is just part of building technology. There's this very nuanced other aspect or layer to this situation. I think that's what we're addressing, which is, and I think we talked about this, the way the terminology gets co-opted by people seeking power and regulation. That's bad. Not only does it conflate AI safety with something else, but it leads to conflicts out there that are not well specified against the actual thing... like, sort of what you're saying. The same concern you had, but from the other side. But yeah, I feel like I've been super explicit: do alignment work if you want, work on these things, but-- a big part of our line here is that it's extremely impractical but also very disruptive or destructive to try to do things like, I don't know, sabotage TSMC fabs, or hope for some skirmish in Taiwan so that fabs get destroyed and hardware progress slows down. You know? Or regulations saying you need a license to use an A100 GPU. I think those things will be likely to backfire. It's easy to predict how it will backfire. I just think...
Beff: We're probably more pro-alignment than a lot of the doomers. Take a look at the shoggoth Lovecraftian monster meme... where LLMs are the monster, and the fine-tuning is the happy face that you put in front of it to make it look like a good thing, but it's actually a monster that will destroy the world. But you know what, GPT is more like humanity's inner consciousness. It's a representation of us. If you think GPT is a monster, then you think humans are a monster. They are saying that alignment is impossible. They are saying RLHF doesn't work and we shouldn't work on any of this. I am an engineer and I want to build AI, and good AI. Of course I want it aligned. Of course I want it to work. That's the kind of alignment we want to do. Maybe alignment is the wrong name, maybe it's just reliability UX or something. We want AIs that work for humans. If you're doing shoggoth memes, you have already given up.
grimes: The person who made that meme is someone who is incredibly optimistic. There's a failure of communication here. I'm a huge outsider to Silicon Valley and all this stuff. I'm just a random person who is roughly a civilian. It seems a lot more tribal and more aggressive than the people who have been in the milieu for years might be perceiving. A gentle criticism I would give is that there could be an improvement in comms for everyone.
bayeslord: Janus and I do talk.
grimes: I don't know how to describe this. I'm coming here from the "the meme is leaving the zone" perspective. You guys are starting- the whole thing is starting to hit civilians now. Whatever was happening before, I just think it's worth considering that... again, as bayeslord and Beff were saying, you guys do express reasonable viewpoints, but you're also harvesting the memetic landscape. It's worth thinking about how you harvest the memetic landscape. As regular people tune in, they come in with zero nuance and zero comprehension. It's starting to get weird in the real world.
bayeslord: The whole memetic landscape thing... I just want to be clear. I think there's a massive difference between having a message you believe in, or messages you believe are true, and optimizing those messages for memetic fitness so that they are well-conveyed, versus picking messages for memetic fitness out of the space of all messages. We have only done the former. We have beliefs. We have things we strongly believe in, like principles, and we have tried to make those things communicable.
grimes: There's a tweet you did a few days ago saying it's going to be acceleration or skynet, something along those lines. A lot of people panicked about that tweet and sent it to me saying this is so unhealthy and this is insane. I can send you the tweet. I don't want to doxx people. One thing I've seen so many times, if I was the PR manager here, is that people, in the moment where something they are saying to a small community goes to a big community, don't recognize that the size is different and the audience is different, and don't recognize their power. In the process of getting from small to big, more than half of the things just die. As a friend, if you want the thing to not burn out and die, or not become net counter-productive, which by the way I have done with AI and all this Roko's basilisk stupid shit... it's worth, I think, assuming that you are communicating to an audience that does not have nuance and doesn't have understanding. One of the reasons why I'm coming here as grimes but I'm not posting this conversation is that I think this conversation should happen long before an average music fan witnesses it.
bayeslord: I think this is a good point. We have talked about this a little bit before. The terminology around what exactly we're pointing at... is it alignment? Safety? Reliability engineering? Is it dooming? MIRI work? What are we critiquing? I think the specificity around this probably matters a lot in your model of something going from a small audience to a big audience. People don't automatically map it to the more accurate version of the thing; they just map it to the thing you said. They don't have priors like you do.
grimes: People coming in now-- if you're anti-doomerism, then you're causing more people to go look up AI doomerism. There's a lot more written about doomerism than optimism. It's the Streisand effect in many ways.
Amjad: In my opinion, Eliezer is going to go mainstream in a big way, especially if he does the Lex podcast and if he does the same talking points as he did on Bankless- that really freaked people out. I think the Yudkowsky message will go mainstream this year or next year in a huge way. I think the New York Times will write about it.
grimes: Terminator exists and we're already there. The doomer message was the first transmission most people got about AI. Rather than attuning people to concerns, what I would focus on is a healthier optimism. If I was the PR agent, having gone through...
bayeslord: I really want to say something here.
Beff: I want to give some color to the origins of e/acc. We were all machine learning engineers in big tech when we started doing e/acc spaces. The whole thing of being anon accounts and stating our true opinions reflects the latent opinions in the workforce... we were trying to say what we really believe without the PR department coming in and censoring our tweets, which is why we made anon accounts in the first place. There's a pervasive mindvirus in a lot of big tech organizations that causes a lot of engineers working on powerful tech to have self-hatred. It's not healthy at all. There's a lot of my personal friends that work on powerful technologies, and they kind of get depressed because the whole system tells them that they are bad, that they make too much money for tech that is bad for the world, and that they should feel bad. For us, I was thinking, let's make an ideology where the engineers and builders are heroes. They are heroes. They sacrifice their lives and health to contribute to this force and greater good. Describing the evolution of civilization in a more formal fashion was us pointing to a north star and sacrificing ourselves to build, even though we're thankless and a lot of people hate us for doing so. I would say that was the original point of the movement. Now that it has traction, now we have to pull our punches? I don't know about that. I think the EA movement and AI x-risk movement has way more product-market ideological fit, and I think as Amjad said it's about to hit the mainstream. There will be no opposition and it will infect everyone's mind. Having a memetic viral counter-movement is kind of important, because otherwise there's no immunity to the EA / AI x-risk mindvirus. If everyone thinks doom is around the corner, we will over-regulate and kill a lot of progress and harm civilization. I work on reliable machine learning for nuclear fusion and for carbon capture materials; we work on stuff that moves the needle, and saying we should stop just because some people with little technical background are writing sequences of words that they think are a plausible future, even though the laws of physics don't agree with them... it just doesn't make sense. When the nuclear scientists building the atomic bomb said maybe if we launch this thing the sky will light on fire-- how did they figure out if it would be okay? They did the math.
Amjad: To the point of nuance... the long-termists, the AI x-risk people, one thing they are really good at is generating an enormous pile of written materials. They are very effective at generating enormous piles of materials. To understand what they are talking about you have to work through all of it, and this is what cults and mystery religions do- it's very effective. By the time you work through the whole thing, their terms are the only way you are able to think about it. New people approach this without pretext and with no nuance.
grimes: One thing going on, when you are talking about the pathos people in the industry are feeling- I would say that if you're feeling that, it's natural; that's the nature of power. If you are in a position of power, you are always in a position of choosing lesser evils. You look at the social media landscape: net good, shit ton of lesser evils. Same with the printing press. Same with every technological revolution/evolution. Feeling pain and concern about the thing you are doing, if you're participating in a massive technological evolution/revolution, is natural and a good state of being, and at all times you should be questioning yourself. I have never seen anyone in a position of power who doesn't feel immense pain a lot of the time. Rather than assuming that is the byproduct of anti-accelerationism or anti-optimism, I would just default to saying that's the nature of power. When one is in a position of power, like anyone working on future tech, then they should take that seriously, and any pain is the pain of you imposing yourself on humanity. There are no victims when you do that. That's okay. Society doesn't move forward without that, but you should understand what you're doing. When you feel pain, it's because you're causing pain. That's what happens when society evolves. It's not necessarily bad, but take all the data points and comprehend what you're doing. There are reasons why a lot of people don't want to assume power: it's because it hurts and it really fucking sucks. All I'm saying is that denying that is a denial of one's responsibilities to humanity, and it doesn't mean you need to be anti-accelerationist or anti-optimist. I think that's what the optimists should be doing. You should be constantly self-analyzing.
Amjad: The point about power is a really interesting one. As much as I tend to believe a lot of people in Silicon Valley and places of power have their heart in the right place, corporations have their own emergent incentives for profit. We're all pro-capitalist here. But at the end of the day, they're going to use the doom message and a lot of the AI safety stuff to pull the ladder up. I think we're already seeing that. They will use regulations. They will use the government. They will use all of that to create an anti-competitive landscape. They have withheld the open-sourcing of large language models based on bonkers "safety" concerns. I remember GPT-2 was the first thing that wasn't open-sourced from OpenAI, based on their claim that they didn't know how impactful it would be. Maybe it would flood the internet with spam or hurt people. Eventually people asked for it, and eventually it dropped. With GPT-3 it was a little bit different, because we didn't have an open-source alternative until LLaMA from Facebook a few days ago. On Hacker News one of the top stories is that it now works on your MacBook Pro laptop. Nothing changed though. They told us this stuff would end the world going back to 2019, and that this stuff would cause a ton of harm. On net, it was hugely positive; ChatGPT made people a lot smarter. All this GPT stuff makes people smarter. Withholding it from open source... by the way, I think OpenAI will do the thing best for their business, but the overwhelming message in the AI community was that we should not open-source the weights because of safety concerns. That turns out to be either a cynical position, to not open source just to keep the power for themselves, or there's a lot of useful idiots who think it's actually true that a lot of these things are harmful and you can't release them, or some people are cynical and want to stop the average individual from doing things. You have to understand that Stable Diffusion, when it was open-sourced, created an insane amount of progress in models and how to optimize them and build on them; startups got started, billion-dollar companies have been started, all because it was released into the world and people could do whatever they want. Now we are seeing that with LLaMA, where a GPT-3-sized model is available. A random hacker in Europe rewrote it in C++ and now it can run on your $2,000 MacBook. That's amazing. We will see amazing progress. People will build cool stuff. But we were delayed 3 years. There could have been 3 years of compounding progress, but we didn't get it because there are people who believe these things are dangerous and other people are cynical and don't want to release it.
grimes: I can't be an authority on open source because I need more data. There was a year between when my address leaked and when someone showed up at my house to try to kill me. Just because we're a few days into this doesn't mean it can't go badly. Diffusion is a lot different than language. Language is a lot more memetically, like, optimal, as you guys would say, than making pretty pictures. Oh, sorry, we're having a kids playdate by the way. I feel like right now, high barriers to entry are good when you're talking about superhuman powers and god-like powers. What's happening here is 100x the power of social media. Everything has the potential to change the entire human landscape. As I said, it can be one or two or three or four years after your house is doxxed before someone shows up to fucking kill you. If a large language model can run on a laptop, then what else can run on a laptop? I don't want to be a doomer. I'm pro using AI for medicine and nuclear safety and using AI for everything. But opening up those capabilities to everyone? I don't know if you guys have ever run into a real sociopath or psychopath, but that's probably 2-4% of people, and most of them are extremely high IQ, and that's a pretty dangerous subset of humans.
bayeslord: There's a lot of nuance there about the timing of releases, what you release, incentives for companies to maintain IP, and if you make a many-millions-of-dollars investment then you probably don't want to just send it out into the world, right? But maybe-- I think we have 10 or 20 or maybe 30 minutes... I don't know what you're thinking, Grimes?
grimes: I'm in another debate in the real world right now.
bayeslord: Cool. You just let us know. Let's zoom out a little bit. It might be helpful to distinguish problems in the communication and, sort of, I would almost call this... in the community of people who know a lot about AI, people like us, people like doomers, whatever, the problems of communication there and the problems of how messaging is happening and all these things. That's distinguished from what happens in the mind of the public and how this conversation affects the public. These are two different things. I would be curious which one; we could try to focus on one for the rest of the time and then go from there. To me it seems useful to think about the conversation around how the public is affected by it when it comes to these things.
grimes: I think that would be the best conversation to have. Right now what I'm perceiving is that I think it's the public, and even just pushing names too hard within the scene. The number one rule of propaganda is repetition. If you want to make something a lot less logical, even with really smart people, you just repeat the same thing over and over again and it starts to bleed into the landscape. Again, I think everyone should communicate as if they are talking to the whole world. When you guys express fear that Eliezer will go on the Lex podcast- then everything you guys are saying, you should assume from this point forward, is getting into civilian hands. The debate itself has exited the internal sphere and everyone should assume the debate is public. I made this mistake so many times. I'm trying to communicate to you guys from my own mistakes. You want to make another anon if you want to start tweeting bullshit anon stuff. I would assume that you are being clocked by people who are taking this shit seriously, including the government.
bayeslord: Absolutely. How I think about the messaging that I feel we're putting out there-- maybe there's a gap in our execution of it, but the message, and the sort of context in which our message lives, is something like this. Eliezer goes out there and says AI is going to, you know, disassemble the universe into constituent parts, and it's going to turn everything into some boring repetitive non-conscious structure, and there's no hope for humans, and you can't beat it at anything because on every dimension it will be super-human, and even if you had some help you would be slower than the AI. There's no hope in his model. "I have thought about this for my whole life, and I'm really smart, and here's some evidence that I'm smart, I sound smart, and I cried my tears about the end of the world 8 years ago, and I've been thinking about it a lot, and a lot of smart people around me agree that there's no hope and we have a high chance that we're all dying, and sorry, but you should try to die with dignity." That's one message. But our message is: no, there are a lot of challenges that we face, but there's a lot of capacity in humanity to overcome things. We're optimistic. We have to remain optimistic, no matter what, or else we get into a self-fulfilling prophecy, or we have people get sucked into very negative states of mind and negative torment nexuses. Just trying to do the work to get to the good future is worth it. Not only is it worth it, but we can do it. That's what the message is. It's something we should be excited about and we can do it. That's what we want to put forward.
grimes: I so agree. The way that cults happened around doomerism-- you're in danger of your own e/acc death cult, which is like accelerate with zero alignment whatsoever. I think it's very worth considering that if you tell a bunch of people with no nuance that the only way forward and the only way to oppose doomerism is to rush into the abyss without any comprehension or interpretability... as a human being, I don't suggest anything other than interpretability. I think we should understand the machine mind.
bayeslord: I'm pro interpretability. Anthropic is doing great work in this direction, for example. There are so many people that came out of the MIRI sphere that are nowadays doing great and important work.
grimes: I don't think so either. The big issue I think, and there's clearly like a massive- we have our own alignment issue on twitter and stuff-- most of us agree on the same things. A thing we could do is move towards optimism without constantly referring to doomerism. Let's talk about optimism and more concrete things that can be done and be useful.
anton: One distinction about us vs the MIRI folks is that e/acc is very much about harnessing the power of shitposting, not taking ourselves intensely seriously, and doing more than just talking about this. Let's build stuff. That's immensely powerful. People lose trust in technology and in AI to actually meaningfully improve people's lives when we only just talk about it. You don't need to be as self-serious as MIRI and Eliezer have been their entire careers. Optimism comes with humor and adaptability and almost a light-heartedness, even though the topic is serious. We should harness this more.
grimes: The same way that the cost of AI is different from other tech in the past, the cost of fame is different right now from the cost of fame in the past. It has a lot more concrete output into the world than it did in the past. I am trying to warn you guys that there's a hiccup when something goes from unknown to known, where it makes mistakes in public. Given how serious the switchover between a non-AGI world and an AGI world is, it would be nice if we had fewer hiccups in communications about such things as they traverse from unknown to known, if that makes any sense at all.
bayeslord: I think this is pretty interesting. I'm still struggling to find exactly where these real points of tension are.
grimes: It's because they're not real. They are regarding communications. You sent me this piece of a podcast earlier today, or maybe it was yesterday, and it was excellent and sick, and everything I've been saying. When I said it in public, I got shit for calling for government regulation when I really just wanted self-regulation between the powers that be. This, for example, is where I think we should be cleaning up our internal communication, especially when it happens in public and not internally. I, for example, did not realize that this is where you guys were coming from, based on how you were communicating. I think you're used to your fanbase that follows your every post. But as your shitposts get bigger, people are seeing the most viral tweet per day, rather than every single tweet you make each day. Every tweet? I would make new anons if you want to shitpost. I would assume that going forward every tweet from an AI account with more than 30,000 followers is actually liable to create chaos in the general public. You start to get into a position where...
Amjad: The e/acc community is less a small community about AI debate and more about introducing people in Silicon Valley to this strong optimism message. It's actually not about AI at all. I remember seeing Beff in my mentions, and I would say something and someone would say oh that's e/acc, oh that's e/acc. I read their stuff and it's actually hugely optimistic, and it explains the world in ways that I already understand. The reason e/acc resonates is that a lot of people have it in them, but a deceleration mindset has been built into people's minds since birth. Everything is dangerous. Everything has to be safe. The other day, you know, the gas stove debate- gas stoves are dangerous; everything that happens is seen through this lens of this is bad or this is wrong or dangerous. There's no more- nobody is talking about adventure or how the future will be exciting.
grimes: I think we should... when I look at AGI, I think, should we do some unprecedented thing and set free a god with no thought whatsoever when we might be close? Or, and maybe it's sketchy, but let's consider talking about significantly less dangerous types of acceleration, like augmenting human intelligence, genetic selection, brain-computer interfaces, genetic augmentation. There's so much stuff- having little kids, I have been doing research into schools. There's a school in Austin, Texas where kids do well on SATs in grade 3. If kids with no augmentations whatsoever can pass high school-level tests in grade 3, then imagine what those kids can do in their most plastic years. What if we can accelerate that concurrently with AI? What if we can minorly decelerate AI and accelerate human intelligence instead? What if we concurrently do less risky things that can retain a sense of optimism and keep continually moving towards a better future in a manner that is much less risky? Let's distinguish between AI and AGI. I think we should be using AI in voting, nuclear stuff, and definitely in medicine and science etc. I don't know. I just think we're weighting things improperly and we ourselves are not appreciating the full nuance of the world we could be approaching and how we could be pushing things.
Beff: I agree that human augmentation is a massive opportunity just sitting out there. I've talked about this endlessly. I think you should work on transhumanist technologies and accelerate all paths forward for consciousness and matter to spread throughout the stars. e/acc is substrate agnostic. But weighing a portfolio of bets, well, that's capitalism; that's what the techno-capital machine is supposed to do. If a bet is less useful, you assign it less.
grimes: Capitalism is not intelligent itself. It should be, but it presently is not.
Beff: I think it's a form of AI. That's one of the theses of e/acc and accelerationism. It's just like your neurons.
Amjad: Let's go over the basics. It's important.
grimes: I think we should consider that some forms of intelligence are very stupid, like language or capitalism. Even though I would agree language or capitalism is some kind of separate entity that functions in its own way, it's not one that is sentient. It's not, like, optimizing... we should also, if we're optimists, distinguish between intelligent design and evolution, and deeply push for the idea of intelligent design.
Beff: It's the same principle for the latent force behind evolution and assembling life and the latent forces behind markets.
grimes: We have witnessed 6 extinctions. Life was not well assembled.
bayeslord: Any fluctuation away from the average growth is... I don't think this is bad. This is great.
grimes: I'll be an asshole in a debate but I'm not ever mad to be clear.
bayeslord: Let's do another one of these. Maybe we're getting close to your time? There's a bunch of people who have requested to come up.