The "Sparks of AGI" paper about GPT-4, which is a significant jump in capabilities from GPT-3.5 (ChatGPT), convinced me that intelligence is an emergent property, and that it's much easier to replicate than I had previously thought. ChatGPT was already impressive, but it never quite felt like the paradigm shift that GPT-4 seems to be heralding. Yes, GPT-4 is still just a large language model, but the story is the potential it shows.
Following this discovery, I got significantly more interested in the subject, and now, after many hours of lectures, podcasts, and reading, I have to agree with DeepMind's (and a bunch of other people's) assessment that there is a chance AI ends humanity. In our lifetimes, even. Progress on alignment is slow, it seems like a hard problem to get right, and we likely need to get it right soon.
While I don't think the chance of humanity ending before 2100 is greater than 50%, I do think it's greater than 10%. One thing I'm certain of: it is the most important crisis humanity will face in its history.
Has GPT-4 changed anyone else's mind on this?
https://www.youtube.com/watch?v=qbIk7-JPB2c