I enjoy reading blog posts by Tyler Cowen not because I always agree with him (I often don’t!) but because he has access to a huge network of people who send him interesting things to share. In effect, I think he’s a great curator of interesting and challenging ideas.

This is one of those things – it’s a real rabbit warren of amazement if you’ve time to follow it all through:

I am increasingly convinced that Scott Alexander was right that NLP and human language might bootstrap a general intelligence. A rough criterion for AGI might be something like (i) pass the Turing test, and (ii) solve general problems; the GPT-3-AI-Dungeon examples above appear to accomplish preliminary versions of both.

GPT was published in June 2018, GPT-2 in February 2019, GPT-3 in May 2020.

As best I can tell, GPT -> GPT-2 was a ~10x increase in parameters over ~8 months, and GPT-2 -> GPT-3 was a ~100x increase in parameters over ~15 months. Any number of naive projections puts a much more powerful release happening over the next ~1-2yrs, and I also know that GPT-3 isn’t necessarily the most powerful NLP AI (perhaps just the most widely known).
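A minimal sketch of the naive projection being gestured at, using the release dates above and the commonly cited parameter counts (GPT ~117M, GPT-2 ~1.5B, GPT-3 ~175B); the 18-month horizon is an illustrative assumption, not a claim from the post:

```python
from datetime import date

# Release dates from the post; parameter counts are the commonly
# cited figures (GPT ~117M, GPT-2 ~1.5B, GPT-3 ~175B).
models = [
    ("GPT",   date(2018, 6, 1), 117e6),
    ("GPT-2", date(2019, 2, 1), 1.5e9),
    ("GPT-3", date(2020, 5, 1), 175e9),
]

def months_between(a, b):
    # Whole-month gap between two dates, ignoring the day of month.
    return (b.year - a.year) * 12 + (b.month - a.month)

# Multiplicative growth between consecutive releases.
for (na, da, pa), (nb, db, pb) in zip(models, models[1:]):
    m = months_between(da, db)
    print(f"{na} -> {nb}: ~{pb / pa:.0f}x over {m} months")

# Naive projection: carry the GPT-2 -> GPT-3 per-month growth rate
# forward 18 months past GPT-3's release (assumed horizon).
rate = (175e9 / 1.5e9) ** (1 / 15)   # per-month growth factor
projected = 175e9 * rate ** 18
print(f"naive 18-month projection: ~{projected:.1e} parameters")
```

This kind of straight-line extrapolation is exactly what makes "a much more powerful release within ~1-2 years" look plausible on paper, though of course nothing guarantees the trend continues.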

When future AI textbooks are written, I could easily imagine them citing 2020 or 2021 as years when preliminary AGI first emerged. This is very different from my own previous personal forecasts for AGI emerging in something like 20-50 years…

GPT-3, etc.