In 1938, a radio drama adapted from H.G. Wells’ novel “The War of the Worlds” caused widespread panic. The broadcast was so realistic that many listeners believed an actual Martian invasion was taking place.

The program simulated a series of breaking news bulletins that interrupted regular programming.

New technologies, as radio was then, can be used to create highly realistic and convincing narratives. People trusted the source: when regular programming was interrupted by a breaking news bulletin, there was no reason to doubt that it was true.

Meet OpenAI’s Sora

With OpenAI’s Sora, the ultra-realistic text-to-video generator that emerged this week, I couldn’t help but think about this and many other examples from the past. We may face challenges similar to those of 1938.

Moving images and the sounds of voices are sources of high trust in every society. When exploited, they have the potential to rapidly spread misinformation and manipulate reality in ways that can be difficult to distinguish from the truth.

Examples are easy to imagine. That call from my friend on a new number must be real; after all, I have never had to second-guess his voice before. And that video call on Telegram with a good friend is probably real too, because second-guessing a video was never necessary… until now.

Those of us who have been exposed to cyber attacks in the past know about social engineering and the spam emails designed to catch our attention.

But faking other media sources has always been difficult.

The story of the employee who sat through a two-hour video call with a deepfaked C-level team and wired out millions happened before Sora even existed.

And when the first scam stories broke, my timelines were full of people saying that we need agreed security words in phone conversations to make sure we are talking to the right person.

How do I know that you are who you say you are? Before AI and deepfakes, a face was a face. Now, it could be anyone on the other side.


When the rate of technological innovation outpaces our ability to adapt, a period of chaos is all but preprogrammed until adaptation and public understanding of the new technology catch up.

It took about a year for open-source models to become as good as GPT-3.5. So how long will it take for a comparable open-source video model to appear, and will we have adapted by then?

Obviously, we are more advanced today. We know about social engineering and many other ways humans can be tricked. But Sora suggests that we need to adapt much faster than we used to.

None of that takes away from the fact that I am genuinely excited to play around with Sora and see what it can do. Whole creative industries will be democratized.
