Cursor's been blowing up on socials the last few weeks, in no small part thanks to this video from Fay Robinett, which has been viewed 2.5m times:

In a time when so many AI features/apps are falling short of the hype, what can you learn from Cursor to figure out which AI apps will win?

Three patterns:

1. They brought the LLM closer to where the user already works and closer to their data (the codebase, which is used as context).

2. There's a validity check: does your code run (or do the tests pass)? If not, the stack trace is fed back into the chat and the LLM fixes it. This combats hallucinations.

3. There's a human in the loop. Automation use cases are underperforming. Augmentation use cases are going great.
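The validity-check pattern in point 2 is simple enough to sketch. Here's a minimal, hypothetical version (not Cursor's actual implementation): run the code, and if it fails, feed the stack trace back to the model for another attempt. The `llm_fix` callable stands in for whatever chat call your app makes.

```python
import subprocess


def validity_check_loop(llm_fix, source_path, max_attempts=3):
    """Run the code; on failure, feed the stack trace back to the LLM.

    `llm_fix(code, stack_trace)` is a placeholder for your chat call:
    it should return a revised version of the code.
    """
    for _ in range(max_attempts):
        result = subprocess.run(
            ["python", source_path], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # the code runs: accept this version
        # The hallucination guard: the stack trace goes back into the chat
        with open(source_path) as f:
            code = f.read()
        fixed_code = llm_fix(code, result.stderr)
        with open(source_path, "w") as f:
            f.write(fixed_code)
    return False  # still failing after max_attempts
```

The point isn't the code itself but the shape of the loop: the app has an objective signal (exit code, test results) that the model's output can be checked against, which narrative-only AI apps lack.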

Full video:


And in tweet form:
