594 Comments

It seems AI safety is becoming a culture war of its own:

https://www.jonstokes.com/p/ai-safety-a-technical-and-ethnographic

I remember this was previously considered a bad outcome, so I wonder if there are many people pulling their hair out right now.


I know I'm way behind, but I just read Meditations on Moloch and thought it was excellent. Has the "Moloch vs Elua" discourse collapsed down into AI alignment, or are other threads still running? I'm particularly interested in finding people working on human coordination at any and all levels.

Mar 29, 2023·edited Mar 30, 2023

I've noticed that several creators (writers, YouTubers) have started getting accusations that they used AI to write or even voice their work. I think that if Scott starts getting these insults, he should issue bans.

Mar 29, 2023·edited Mar 29, 2023

So this surprised the heck out of me: https://ustr.gov/issue-areas/economy-trade

"According to the Peterson Institute for International Economics, American real incomes are 9% higher than they would otherwise have been as a result of trade liberalizing efforts since the Second World War. In terms of the U.S. economy in 2013, that 9% represents $1.5 trillion in additional American income."

I'm stunned that number is only 9%. For as much push as I've seen (and made myself) for "free trade is good," "rising tide lifts all boats," etc, etc over my lifetime, I was really surprised the number was this low.

I mean, it's one thing to say to an unemployed Ohio factory worker "yes, you're facing personal hardship, but free trade makes us all better off," but it's quite another to say "yes, you're facing personal hardship, but free trade makes us all 9% better off."


https://www.bbc.co.uk/programmes/w3ct59qb

Some interesting technology: consensus-building online for Taiwan, budget transparency for Nigeria, and ease of access for Estonia (everyone gets a single number for dealing with the government; eliminating that bureaucratic friction saves five work-days per person per year), plus a project to develop a virtual Tuvalu, since the islands are likely to be under water.

I'm especially interested in Taiwan's approach, so here's a link.

https://www.theguardian.com/world/2020/sep/27/taiwan-civic-hackers-polis-consensus-social-media-platform

The idea is that democracy isn't 51% getting to lord it over 49%, it's better to look for consensus, and well-designed computer programs can help people find consensus.


_What Moves the Dead_ by T. Kingfisher (Ursula Vernon) is a fine horror novel based on "The Fall of the House of Usher" -- very horrifying, very funny in spots, and of interest to rationalists because it's got some interesting speculation about a scary and probably not adequately aligned high intelligence.


FLI published an open letter asking all labs to put a 6-month hiatus on training any system more powerful than GPT-4: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

I think it looks a bit like a big-tent thing, with the letter appealing both to people concerned about x-risk and to those concerned about short-term social disruption. The letter doesn't operationalize "more powerful than GPT-4," which seems like a pretty important omission, especially given that they call for government intervention in the event that the labs don't voluntarily comply.

Interesting signatories (IMO):

- Yoshua Bengio: co-winner of the Turing Award for neural network research

- Gary Marcus: because it seems to conflict with his generally deflationary attitude towards the capabilities of these systems

Here's an interesting point I heard Sam Altman make (I'm sure it's been said elsewhere first): Isn't short timeline + slow takeoff preferable to long timeline + fast takeoff? Imagine we stopped working on AI for 10 years but electricity and compute kept getting cheaper. In 10 years we'd be taking off waaaaaay faster than right now.


When you are doing a school exam and you are "thinking" what is that like?

I was one of those weirdos who when given a 3 hour exam would finish in 45 minutes. I had friends that got pretty similar grades who took most of the time given. When I asked why they needed that much time/what were they doing, they always just said "thinking, of course!". I never really pushed for more info, because I didn't want to come across as "that guy". But it's been a decade since I graduated and I am still curious about this.

For me: I have an inner-monologue that is always monologuing. In a test, it wouldn't be doing much that was useful towards answering questions. Usually it would be monologuing about how uncomfortable the chair was, the temperature of the room, or keeping track of how much time had passed. The most useful thing it could do was remind me to double-check for the mistakes I tended to make. When it came to answering the exam questions, it seemed that I would somehow "just know" what the next step I needed to perform was. When I was done writing that down on the paper, I would "just know" the next step. And so on. This meant that I could basically sit down and write as fast as I could until I was done.

What was it like for you? What was going on in your head when doing an exam? If you also have an inner-monologue, did it help with problem solving?

Mar 28, 2023·edited Mar 28, 2023

Anyone who watches international news will be aware of the nationwide riots currently happening in France, especially Paris, over President Macron's initiative to raise the pension age from 62 to 64.

Looking at video footage, it appears the vast majority of rioters are young people, as one would expect. But for most normal people under the age of 30, pensions are about as relevant as the far side of the moon. I know they were for me at that age. So what possible grievance have they with the policy, when it won't affect most of them for decades?

It isn't as if it will raise levels of tax, quite the opposite (in intent anyway). So unless the rioters feel old timers retained in the jobs market for longer will compete with younger people for jobs, I presume it isn't really about pensions at all but just an excuse to let off steam and hone their barricade building and petrol bomb throwing skills!


If you have read Unsong, especially the Broadcast, and have felt that that changed the way you relate to the gargantuan amount of evil and suffering in the world, how has that done so? What are you doing differently in your life?

I think the book gives us four responses to the problem (with two halves to each response – the Blakean parallel thing): 1) the Comet King/Robin West: "somebody has to and no one else will", 2) Aaron Smith-Teller/Ana Thurmond: intellectual revolution or "Marx didn't hand out flyers either", 3) Dylan Alvarez/Erica Lowry: go berserk consequentialist, and 4) Uriel/Sohu: disaster relief or attending to the broken infrastructure of the world. Sarah and THARMAS don't count.

Which do you think is the best response, and which do you think Scott was advocating for in Unsong? Are Comet King-style plans a good idea?


Has anyone found a reliable way of distinguishing human-written text from text written by GPT-3.5/4?

I used to find that human-written text was generally easily distinguishable because AIs couldn't stay on topic for extended periods, but that's clearly not the case anymore. AIs can still be induced to make basic reasoning errors that would be unlikely for a human, but it takes some work to get them to do that nowadays; it's not just something that they do by default.
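The closest thing I know of to an automated check is perplexity scoring: text sampled from a model tends to look more predictable to another language model than human text does. Below is a minimal sketch of the idea, assuming the `transformers` and `torch` packages and using GPT-2 as a stand-in scorer; there's no calibrated threshold here, and this approach is known to be unreliable against GPT-4-class text.

```python
# Sketch of a perplexity-based detector: AI-generated text tends to score
# as more predictable (lower perplexity) under a language model.
# Assumes the `transformers` and `torch` packages; GPT-2 is a small
# stand-in scorer and no threshold is calibrated here.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the model returns mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

sample = "The quick brown fox jumps over the lazy dog."
print(perplexity(sample))
# Lower perplexity *suggests* machine generation, but the signal is weak
# and easily defeated by paraphrasing or unusual prompts.
```

In practice this flags formulaic human prose and misses lightly paraphrased AI text, which is roughly why the commercial detectors built on it have such high error rates.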


If anyone read Peter Zeihan's 'The End of the World Is Just the Beginning' and got real worried and thinks we're all doomed, I might have a cure for what ails you. I just wrote up my thinking on why his argument is wrong and the global order is not about to fall apart: https://medium.com/@bobert93/contra-ziehan-on-the-world-being-doomed-3f94368314c0

Mar 28, 2023·edited Mar 28, 2023

It seems inevitable to me that, at the present rate of progress, AGI will come to fruition sooner or later, and that in most respects it will reflect human nature in character, with humans' flaws and virtues.

So I reckon the most likely way to maximise the chance of AGI safety is to ensure that it is an "average" ensemble of attitudes distilled from many human minds (besides its superior and faster intellect, of course), and not trained on and based around one human, or a small group of humans, whose attitudes and ambitions were abnormal and quite possibly in part pathological. So in short, when training an AGI, democracy should be the watchword: safety in numbers!

After all, serial killers, doomsday cultists, and others with insatiable destructive passions are very much a minority; on average, people are mostly fairly laid back and content. Of course, the self-survival instincts most of us share might be worrying when incorporated in an AGI, but base instincts such as greed or lust, which most of us also share in varying degrees, are not applicable as AGI attributes.


Recently there's been a lot of discussions of AI risk due to the explosion in LLM development, especially by EY and other AI Alignment people. Which of the following statements is closest to being true about these discussions?

1. Alignment people genuinely believe that GPT-X (or equivalent LLM-based models) can lead us directly to AGI and are thus ringing the alarm bells before it's too late.

2. They don't think GPT-X will lead to AGI, but think we're setting a bad precedent by moving so quickly with LLMs, and therefore sound the alarm to set a precedent. This doesn't matter for GPT-X-type tech but would matter for some other, yet-to-be-discovered technology.

3. The explosion of interest is a great opportunity to obtain funding for AI Alignment research, so they're ringing the alarm bells primarily as a fundraising opportunity.

4. No one knows whether or not LLMs are actually dangerous and there's no deep strategizing going on in the background. All the reactions are just standard instinctive reactions to major AI developments.

I'm leaning towards #2 for highly knowledgeable people such as EY and #4 for people who only have cursory knowledge about the problem. What's the real answer?

Mar 27, 2023·edited Mar 27, 2023

There's been lots of discussion and pieces written on LLMs lately, so let me throw mine into the mix. I respond to dismissive criticisms and offer a positive argument in favor of LLM understanding. I also talk about some implications for humanity and society. There's also good information added in the comments.

https://www.reddit.com/r/naturalism/comments/1236vzf/on_large_language_models_and_understanding/

Mar 27, 2023·edited Mar 27, 2023

Like many others, I've been reading and thinking a lot recently about AI doom scenarios. I find a lot of the claims made in these doom scenarios about how the AI escapes control or exterminates humanity intuitively implausible, but I wanted to see if I could clarify my thinking on these and form a better idea of how likely different scenarios are. This led me to think about what capabilities a general superintelligence (henceforth GSI) would have and how it could affect progress in various areas. I don't have a blog or anything and it's not a lot anyway, but I wanted to share what I came up with and this seems like a good place for it.

By general intelligence, I here mean the capability to grasp arbitrary information and draw inferences from it. For instance, learning the rules of chess doesn't take much intelligence, nor does knowing the locations of the pieces in a given game state. But being able to infer from these locations what is a probable winning move takes more intelligence. The more intelligence you have, the stronger the moves you can find. You may find these best moves by being able to "roll out" many long sequences of moves, or by developing better heuristics about what moves are good in what situations; either way, we'll call this intelligence. GSI is just a matter of substantially greater degree. In our chess example, a GSI would be able to consistently find much stronger moves than any human player analysing the same board, after a comparable amount of experience with chess. By definition, this capability extends beyond chess to any problem we might imagine. There are legitimate questions of whether truly general intelligence is possible, or whether advancing narrow intelligence past a certain point requires sacrificing generality, but for the sake of this post I'll assume that it is possible and that it doesn't require such a sacrifice.

However, intelligence is only one factor in solving problems. Two others are data and power. Chess is a kind of problem that is bottlenecked by intelligence. Both players have access to the same data (the full state of the board) and the same power (the set of available pieces and the moves they can make with them). We could change this, adding a power bottleneck for one player by giving them only a king and the opponent a full set of pieces. In this case, GSI will be of little use - even a relative novice could beat Stockfish most of the time in this scenario. Or we could add a data bottleneck by hiding most of the game state from one player, maybe showing them only the locations of their own pieces.

So I can speculate about which factors (intelligence, data, and power) are the bottlenecks in various areas or specific problems, and this may give us a sense of how much help / danger a GSI would be in those areas. Of course I acknowledge that these factors often interact - we can sometimes use power to obtain data, or intelligence to obtain power, etc. Hopefully others can share their thoughts and correct obvious errors or blind spots in what follows.

Fundamental physics: right now, it seems to be mainly bound by data / power. We have plenty of theories about how to unify quantum mechanics and general relativity, but the experiments needed to test them are way beyond our physical reach. We would need far bigger accelerators than we can build or power, for example, to gather the needed data. So we should not expect progress in physics to be accelerated much by GSI.

Microbiology & biotech: Here there is ample data and plenty of power to conduct experiments. But biological systems are incredibly complex with many moving parts; progress is plausibly limited by the ability of an individual biologist to hold these parts and their dynamics in their head. So GSI may accelerate this a great deal.

Nanotechnology: Unclear. Potentially GSI could accelerate progress a great deal, if experimentation could be automated and made to take place very quickly. But depending on the application, experiments might necessarily be quite slow to conduct and observe the effects of. Also, the physical limits of what is possible here are largely unknown, and may prove to be very restrictive. Are the remote-controlled diamondoid nano-assassins alluded to by Yudkowsky even possible in theory? We can only guess. Still, this uncertainty should give us reason to worry.

Psychological control: Here I'm talking about the ability to manipulate an individual person's actions by observing them and communicating normally with them, without any kind of brain-machine interface. This one is relevant to the likelihood that a "boxed" AI could persuade its handlers to release it. This strikes me as being heavily data-bound. Only limited and noisy information about a person's inner state is ever available, so most relevant data is hidden, and the controller's power through the slow, coarse method of speech is more limited still. And on top of that, minds appear to be chaotic systems, like the weather. These systems defy prediction because of their extreme sensitivity to starting conditions; even with a perfect simulator, a tiny error in starting data can throw predictions completely off. The purported outcomes of a handful of online role-playing games (https://www.yudkowsky.net/singularity/aibox) notwithstanding, a GSI probably can't do much better here than the most adept human manipulators. Of course, that means it's far from impossible. But given a savvy subject, I think it would remain very difficult.

Political control: Here I mean the idea that a government with access to a GSI, or a GSI in a position of political power, could "lock in" its regime for all time by essentially out-gaming any internal threat to its hegemony (we'll ignore external threats here). For essentially the same reasons as in psychological control, I think this is fundamentally data-limited: a polity is also most likely a chaotic system, so increasing intelligence will tend to yield rapidly diminishing returns.

And that's all I've got so far. I'm very interested to hear other people's thoughts and critiques.

EDIT: I just saw someone posted this link in an earlier comment: https://betterwithout.ai/radical-progress-without-AI A quick look indicates this covers similar ground in much greater depth. I'll have to give it a read.


Hi Scott (or anyone who takes Scott's position in The Media Very Rarely Lies). I'm sympathetic to your position on the media, but...

I was thinking the other day about fictional depiction of real events. The Crown (Netflix) in particular has come under criticism in the UK for mixing fact and fiction. Two questions: does this count as "the media" and does it count as "lying"? Is it the media? Netflix is also in the documentary game, there are some Diana documentaries on there. Many companies produce both journalistic content and fictional content. Is it lying? Pure fiction isn't lying but sticking alternative facts into a supposedly true story looks awfully like lying to me. And when presented alongside actual journalism on the same platform, it enables viewers to jumble up fact and fiction in the desired way, with plausible deniability for the company (because the documentary obeys journalistic law and professional standards, and the fictional account is just a fun story). Am I being unfair?


Here's GPT-4 playing go. It loses pretty badly. It plays pretty well though, y'know for an LLM that was never explicitly trained to play go.

https://ethnn.substack.com/p/i-played-a-game-of-go-against-gpt


A question about a possible GPT3/4 use case:

I'm learning German and am struggling to find media to consume that is at the sweet spot of competency for me where it's easy enough to read/listen to but challenging enough that it's stretching my abilities.

I'm wondering if I could feed my vocab list (somewhere between 800 and 1,000 words; more if you count tenses and declensions) into GPT and ask it to write me short stories that mostly use my vocab, limiting new vocabulary to 5-10%.

Is this something that GPT would be decently successful at?
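You can't get hard guarantees on vocabulary from the model, but prompting with the list and then checking the output mechanically is cheap to try. Here's a rough sketch, assuming the `openai` Python package (v1-style client) and an API key in the environment; MY_VOCAB is a placeholder for a real list, and the naive word check ignores exactly the declension problem mentioned above.

```python
# Sketch: ask GPT for a story constrained to a vocab list, then measure
# how many words actually fall outside the list. Assumes the `openai`
# package (v1-style client) with OPENAI_API_KEY set in the environment;
# MY_VOCAB is a tiny placeholder for a real 800-1,000 word list.
import re
from openai import OpenAI

MY_VOCAB = {"der", "die", "das", "hund", "laufen", "haus"}  # ...your list

client = OpenAI()
prompt = (
    "Schreibe eine kurze Geschichte (ungefähr 150 Wörter) auf Deutsch. "
    "Benutze fast nur Wörter aus dieser Liste; höchstens 10% neue Wörter: "
    + ", ".join(sorted(MY_VOCAB))
)
story = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# The model will not obey the 10% limit exactly, so verify afterwards.
# Note: exact matching ignores declensions; lemmatizing would be better.
words = re.findall(r"[a-zäöüß]+", story.lower())
new = [w for w in words if w not in MY_VOCAB]
print(story)
print(f"{len(new)}/{len(words)} words outside the vocab list")
```

If the fraction of new words comes back too high, you can feed the offending words back in a follow-up message and ask for a revision, which in my experience is more effective than a single stricter prompt.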


Alignement, a short story cowritten with ChatGPT: https://nestordemeure.github.io/writing/fiction/alignement/

Mar 27, 2023·edited Mar 27, 2023

A professor of mine told me it is best not to try to get a PhD in philosophy if originality and creativity are my concern. His reason: the way I write philosophy (aphorisms, rich metaphors, literary devices) would be seen as non-academic wordplay. Does this narrow view hold true for most of academia? I do care about writing, but it doesn't compare to how much I want to teach philosophy. Can anyone relate to this problem?


Recently updated my Scott-inspired retail pharmacy explainer, expanding the late-2019 original to include a couple bits re: COVID et al., available at https://scpantera.substack.com/p/navigating-retail-pharmacy-post-covid

Doubled my barely-double-digit readership last time I posted here so I wanted to give it one more go.

Mar 27, 2023·edited Mar 27, 2023

Is AI devaluing introverts? When I was growing up, it became clear that extroverted people find it easier to deal with social situations and achieve success in society. I consoled myself with the belief that I had somewhat stronger "analytical skills" and that success was still achievable through them. However, it seems that human-level analytical skills are quickly becoming obsolete. Social activities will continue unaffected, and the path to social advancement will be closely linked to one's ability to vibe, in which case introverts are out of luck long term.

Mar 27, 2023·edited Mar 27, 2023

Is there any work looking at LLMs with classical neuroscience methods as they're being trained? For example, I would not be surprised if GPT-4 has the equivalent of "place cells": nodes or ensembles that consistently light up when it tries to predict in the vicinity of a certain point in semantic space.
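Nothing like this can be run on GPT-4 from the outside, but on an open model the basic probe is straightforward: hook a layer, record activations for prompts from different semantic neighborhoods, and look for units that fire selectively. A toy sketch with GPT-2, assuming the `transformers` and `torch` packages; the prompts and the choice of block 6 are arbitrary illustrations.

```python
# Toy "place cell" probe: record one transformer block's activations for
# prompts from two semantic neighborhoods and look for units that fire
# much more strongly for one than the other. Assumes `transformers` and
# `torch`; GPT-2 stands in for closed models we can't open up.
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

captured = []
def hook(module, inputs, output):
    # output[0]: (batch, seq, hidden); keep the mean over the sequence.
    captured.append(output[0].mean(dim=1).squeeze(0))

handle = model.h[6].register_forward_hook(hook)  # a middle block

def acts(prompts):
    captured.clear()
    for p in prompts:
        with torch.no_grad():
            model(**tokenizer(p, return_tensors="pt"))
    return torch.stack(captured).mean(dim=0)

ocean = acts(["The ship sailed across the sea.", "Waves crashed on the shore."])
city = acts(["Traffic filled the downtown streets.", "The subway was crowded."])
handle.remove()

diff = ocean - city
top = torch.topk(diff.abs(), 5).indices
print("candidate 'place-cell-like' units:", top.tolist())
```

With two prompts per category this only surfaces candidates, of course; a real version would need many prompts, held-out validation, and controls for surface features like shared tokens.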


In all the recent and topical discussion about AI and AGI, I haven't seen any mention of an obvious angle, namely using AGI to monitor and limit other AGI. "Set a thief to catch a thief," so to speak (assuming there is a fear of the AGI being watched going rogue).

Some thought should be given to protocols and rules that would allow this to happen safely, without AGIs being able to deceive or traduce other AGIs (or humans). It might still be a risk though, because AGIs in competition or conflict with each other would leave humans vulnerable in the middle, like sparrows hopping about between fighting eagles!


Bringing this over from the private thread:

---

I'm seriously questioning the decision to disable likes in the comments. If it were a downvote/upvote system, I can get it, because then it's seriously easy to dogpile by just downvoting. But on here, all we can do is give each other little hearts. I often wish I could give someone a little heart, as I sometimes just approve of a comment without having anything to add to it. What's so wrong about that?

---

Here's a survey to see how we feel about the little hearts.

https://docs.google.com/forms/d/e/1FAIpQLSclTb8vHr03cUHkgFplaUKjk6kDvyIidfHt4rZuJPi2kv6hng/viewform?usp=sf_link

Mar 27, 2023·edited Mar 27, 2023

Hi everyone! I'm on a quest to learn more about group rationality. I'm especially interested in incentives that can help people achieve a common goal when individually, the group members have secondary goals that can interfere with the common goal.

Does anyone here have suggestions where I can read up on that? I don't care much about the medium – anything that you can think of is fine (books, papers, videos, forum posts, ...).

I'm already aware of the resources on the following list: https://yulialearningstuff.substack.com/p/books-for-my-group-rationality-quest


I found this critique of studies on exercise as a treatment for depression interesting:

https://twitter.com/GidMK/status/1640217437898694656?t=6OxWEI1Nc7k8xt0pCJ0EIQ&s=19

Caveat: I haven't done an independent lit review, so I can't vouch for the conclusion, but it seems like an interesting jumping-off point on the question if anybody is curious.


Does anyone know of any research measuring to what extent the "East Asian advantage" is confounded by test prepping? Also, whether measured IQs might be a less reliable approximation of "g" in East Asians?

Mar 27, 2023·edited Mar 27, 2023

https://twitter.com/nearcyan/status/1640094958282588160

Am I the only one made a little uncomfortable by how quickly we're seeing culture war tropes in AI-don't-kill-everyone-ism? The original post here is fine, but the responses from OP and associates...

...For a faction that spends a lot of time ridiculing the more ridiculous factions in the culture war, it's alarming how quick they are to emulate the "anyone whose actions don't agree with my faction's extreme fringe minority view[1] of what we should be doing is ontologically Evil; those actions are comparable to murder" thing that the worst wokists and fundies do. Especially in response to something that is:

a) truly inevitable (someone *was* going to do it for GPT-4 sooner rather than later),

b) not even something all AIDKEists agree on (most don't think GPT-4 is AGI, and no one can agree on whether we should be focusing on encouraging fire alarms, given that slowing capabilities is totally outside the Overton window without one right now), and

c) not self-evidently that harmful (an open-ended "it's fine now, but think of how this might set precedents for totally different situations in the future" feels dangerously similar to the bad slippery-slope arguments that bad-faith wokists use to claim that, e.g., people using the word "field" in college sets precedents that lead to Nazism). Writing code to let a definitely-not-AGI out of the box is categorically not the same level of evil as letting an AGI out, and it's an error to equate the two.

Like, this especially scares me because I've seen what sort of culture forms when you have people in a political faction whose ideas can be mocked. If AIDKEism lets itself get dragged into the culture war, and we get the same sort of reflexive anti-activism we see with low-information anti-SJW types, and Yud is right? That's game over for the species.

(sorry if this isn't super coherent, it's 2:30 in the morning and I couldn't sleep until I wrote some version of this out)

[1] Which, to be clear, AIDKEism currently is, regardless of how obviously true it is to many rationalists. If we want to evolve it past that, it seems pretty critical that we "be nice, at least until we can coordinate meanness," to use a Scott-ism.


beowulf888, who posts here fairly often, has been posting a weekly Twitter thread diary of SARS2/COVID-19 developments in the US since early January. He's smart, fair-minded, and succinct -- sort of like Zvi, but more careful about details, much less brusque, and of course briefer. His latest is here: https://mobile.twitter.com/beowulf888/status/1639684995500638208


Possibly a stretch, but what the hell, I'll ask anyway: I've heard Worm compared to Homestuck, as the closest thing to it that isn't chock-full of multimedia bells and whistles. Other people who are merely scifi/ratfic fans in general dunk on it as overrated, though. So, ideally answered by someone who's read both: is it worth my time? I'm caught up on all other ongoing serials and find myself with a dearth of good longform fiction.

Other long serials I've enjoyed, to various degrees: Mother of Learning, anything by Alexander Wales (hi!), HPMOR, Harry Potter and the Natural 20, A Hero's War, The Flower That Bloomed Nowhere, Project Lawful, There Is No Antimemetics Division, Friendship Is Optimal.


Could an AI "cheat" on getting satisfaction from completion of its goals? Like create a computer virus to infect itself so that it gets the satisfaction of completing "make widgets" without having to actually make any widgets beyond what is necessary to avoid suspicion that it's cheating?


The government should be able to use taxes and cash transfers to create the level of inequality it wants. So can't we just fix inequality at some level (a Gini of 0.3, say) and automatically adjust the redistribution rate to match it? This seems like a much easier solution than complaining about increasing inequality only after some fundamental technological or cultural shift has occurred.
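The feedback loop itself is mechanically trivial; the hard part is political. A toy sketch of the idea, with a flat tax returned as an equal per-person dividend and the rate found by bisection (the income figures are made up for illustration):

```python
# Toy version of "pick a Gini, solve for the redistribution rate":
# tax every income at rate t, return the revenue as an equal per-person
# dividend, and bisect on t until the post-transfer Gini hits the target.

def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Standard formula for sorted data: G = 2*sum(i*x_i)/(n*sum x) - (n+1)/n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

def redistribute(incomes, t):
    mean = sum(incomes) / len(incomes)
    return [(1 - t) * x + t * mean for x in incomes]

def rate_for_target(incomes, target, iters=50):
    lo, hi = 0.0, 1.0  # t=0: unchanged; t=1: full equality (Gini 0)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if gini(redistribute(incomes, mid)) > target:
            lo = mid  # still too unequal: redistribute more
        else:
            hi = mid
    return (lo + hi) / 2

incomes = [12_000, 25_000, 40_000, 65_000, 90_000, 150_000, 400_000]
t = rate_for_target(incomes, target=0.3)
print(f"pre: {gini(incomes):.3f}, rate: {t:.3f}, "
      f"post: {gini(redistribute(incomes, t)):.3f}")
```

Because the post-transfer Gini falls monotonically as the rate rises, bisection always converges; the real-world complications (behavioral responses to the tax, measurement lag) are exactly what this toy leaves out.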


So I've been reading some about land value taxes, and I keep seeing the argument made that these taxes are good because, effectively, they don't discourage the production of some socially valuable good. I see a lot of arguments being made that, e.g., "if you tax [production of object] you'll get less [object], but the supply of land is inelastic so you won't get less of it by taxing it". Maybe real LVT people see this as a simplistic argument, but I have seen it get made by a lot of LVT proponents.

I am very confused by this argument. If you tax land value, you obviously do get less of something. You get less ownership of land. Is Georgism explicitly a program that aims to incentivize renting over ownership? I don't see it getting presented that way (which would probably be an incredibly unpopular presentation), but is that what Georgism is actually supposed to be and I just haven't seen any Georgists clearly come out and state "we want fewer people to own land and more people to be renters" ?

Relatedly, would Yimbyism get more traction if we abolished property taxes? Unlike income taxes (people can adjust their income with an eye for what tax bracket they fall into) and sales taxes (people can reduce consumption overall), people can't easily control the cost that property tax imposes on them by adjusting their habits. Hence rising property values present a real economic stress that makes people feel powerless and unfairly treated, and creates a perverse incentive to oppose amenity development.


Why is it that people are only concerned about unaligned AGI? If you could control the behavior of an AGI, is there not another problem: how this would warp realpolitik-level reality?

The Wizard whom The Genie serves could become quite powerful...


What are your current contrarian predictions for the next 5 years?

[Open question to all, not just Scott]


I feel like it's underdiscussed that China has apparently not had massively negative effects from ceasing Covid Zero. As far as we know, they have not had a tremendous wave of sickness & death. Obviously the CCP is highly incentivized to lie, but China is very integrated into the global economy- there are lots of Westerners who live there, have VPNs to use social media, businessmen fly in & out regularly, etc. If millions of Chinese were now dying of Covid after they lifted NPI restrictions, we'd know about it on some level, and apparently this.... just hasn't happened.

Feels like this is a massive indictment of NPIs like distancing and lockdowns? Isn't this as close to a natural experiment as possible, proving that they simply aren't very effective?


Hello all, I am a young engineering student that has recently been trying to get into classic literature and I would greatly appreciate recommendations! I’ve been reading mostly fantasy my whole life (Sanderson, Rothfuss, Tolkien style stuff), but recently I have wanted to deepen my knowledge of the literary canon and the humanities more broadly.

Since starting my classics journey a couple of months ago, I've read Hemingway, Dostoevsky, Orwell, Rilke, Camus, and several others, to varying success. I sometimes feel like I'm just blindly throwing darts at the board and trying to land on something valuable, so I'd appreciate any wisdom or direction on the subject.

I’ve been thinking about picking some Jane Austen or Oscar Wilde but I’m not super attached to the idea. Let me know!


Well it’s been just over three weeks on quetiapine/seroquel, I’m now on 200mg a day.

Other than as a tranquilizer at bedtime this stuff is USELESS!

The first two weekend days gave me some more energy, but once I went back to work on Monday I felt even more lethargic in the morning, though a little more energetic in the early afternoon.

Mood-wise, I still ruminate on my lack of a fully romantic relationship and my unhappy marriage, and I get tearful or angry.

The pills don’t work.

I have an upcoming telephone appointment scheduled on May 15th with the psychiatrist who prescribed me these pills, I suppose I’ll keep taking them until then to give them a fair shake but I’m not optimistic.

This will be the fifth anti-depressant I’ve tried (likely) without success (though one, bupropion, did work to diminish my sadness, but it was at a price of greatly increasing my anger).

Mar 27, 2023·edited Mar 27, 2023

Software engineer considering a mild career pivot. Inspired by my own struggle on the anxiety spectrum, I want to build one or more open-source web apps that support the user's mental health, and apply whatever research exists on effective interventions.

Some ideas: help folks understand their mental state over time ("How do you feel this evening?"). Help folks build good habits with mild gamification: the user creates a morning routine checklist and builds a streak each day they check all the boxes (a sketch of the streak logic follows below). Help folks break undesired habits: "response plans" for when they (e.g.) procrastinate at work, eat 7 slices of pizza, or have a panic attack. In each case, the app offers user-customizable guidance to help break the habit. You see how often you engage in the bad habit, hopefully decreasing with time. Maybe also guided meditations and breathing exercises. Maybe long-term goal setting and tracking progress toward those goals. Maybe someday, a "multi-player" aspect where you and a loved one can follow each other's progress.
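For the streak mechanic specifically, the core logic is small enough to keep fully on-device, which fits the local-first goal. A minimal sketch (the dates are a made-up example):

```python
# Sketch of the streak logic for the routine checklist: a streak is the
# run of consecutive fully-completed days ending today, or yesterday, so
# an unfinished morning doesn't zero the user out.
from datetime import date, timedelta

def current_streak(completed_days: set[date], today: date) -> int:
    # Allow the streak to stay "alive" if yesterday was completed but
    # today's routine hasn't happened yet.
    day = today if today in completed_days else today - timedelta(days=1)
    streak = 0
    while day in completed_days:
        streak += 1
        day -= timedelta(days=1)
    return streak

# Made-up example: all boxes checked on three consecutive days.
done = {date(2023, 3, 25), date(2023, 3, 26), date(2023, 3, 27)}
print(current_streak(done, date(2023, 3, 27)))  # -> 3
```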

A lot of proprietary apps do things like this, but I want to build it as open-source, local-first software (https://www.inkandswitch.com/local-first): "you own the data, in spite of the cloud". Probably starting as a progressive web application (PWA). The project would be open to public contributions.

The initial audience might be vaguely-sophisticated nerds who want these features without bigcorp surveillance, and perhaps who want to export all the data for their own analysis. But the big goal is to build a highly usable solution that everyone can benefit from, mildly analogous to Signal (the messaging app).

Is anyone working on approximately this? Who should I talk to?


It seems like predictionbook.com has been down for the last several days- does anyone here know anything about that? Is it likely to be fixed at some point or gone for good?


Has anyone else noticed that TED talks suddenly are not a thing anymore? Wrote a little piece about it here --> https://fictitious.substack.com/p/where-did-all-the-ted-talks-go


First: It's very frustrating when people talk about some problem or topic being hard, as it discourages many people from engaging with the domain in the first place. Some problems are only hard until someone figures them out; then they suddenly become common sense and obvious. Which is also frustrating, because people forget what it was like to not have the answer in the first place. It only takes a single person to figure out the problem and share the answer. Become harder than the problems you are facing and scratch them up.

Second: a lot of people are critical and negative towards Lex Fridman and his podcast; in particular, they criticize his interviewing capabilities. But where are the alternatives? There have to be other tech people who are reasonably well-connected and charismatic interviewers. I'm normally pretty positive towards Lex, but his latest interview with Sam Altman was very disappointing and shallow. I'd love an interview that digs deeper and asks much harder questions than what was presented. Unfortunately, I imagine that anyone willing and able to conduct deeper and more challenging interviews would also be unlikely to get the interview in the first place.


I'm planning on visiting Europe starting with Berlin on the 26th of April, looking for suggestions/invitations for cool places and activities. Currently leaning toward making my way down to Italy via train or car but easily swayed.


I will be visiting Ghana, Togo and Benin for two weeks in May. Any SSCers there? Any recommendations?


Asking questions in the comment section is the worst way to poll this particular audience, but here's a question about AI providing psychotherapy.

For those who have never been interested in a therapy with a human, would you find therapy with something like a specially RLHFed ChatGPT more or less attractive?

For those who have done therapy with humans, do you think you could be interested in therapy with an AI? Why or why not?


A piece arguing against the certainty of an AI-immersed future, and for the potential of a Luddite-adjacent movement to push back enough to remind us of why we might wanna stay human.

https://kyleimes.substack.com/p/is-something-like-a-successful-neo


Deiseach fights Vinay Gupta image:

https://www.datasecretslox.com/index.php/topic,9003.msg371749.html#msg371749
