AI in Action: Recreating Apple's Iconic "1984" Ad in 4 Hours
Fresh off some amazing advancements in the AI video space (Google's release of Veo 2, in particular), I set out to see what it would take to re-create an iconic piece of content using AI tools. Beyond the initial exercise of generating something fairly generic from unconnected AI-generated b-roll, I wanted to test how well these tools could create something with a story-driven narrative and a well-defined structure and vision.
For this exercise, I decided to see if I could re-create (or re-imagine) Apple's famous "1984" Super Bowl ad, a truly groundbreaking one-minute spot that introduced the Apple Macintosh computer to the world. Directed by Ridley Scott (Alien, Gladiator, Thelma & Louise) on a production budget of $900,000 (around $3,000,000 in 2025 dollars), the ad won numerous awards and is widely regarded as one of the most impactful and influential advertisements of all time.
Here's the original:
Here's my "reimagined" version:
Here's a step-by-step guide on how I did it:
1. Breaking Down the Original Shot-by-Shot
To recreate the commercial as faithfully as possible, I took screenshots of each scene from a high-definition version of the ad. This gave me a clearly defined shot list to rebuild via AI.
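I grabbed the stills by hand, but the same shot-list extraction can be scripted with ffmpeg (assuming ffmpeg is installed; the filenames here are hypothetical). The first command just synthesizes a short test clip so the sketch is self-contained; in practice you'd point the second command at your HD copy of the ad.

```shell
# Generate a short synthetic test clip standing in for the source video
# (replace with your actual high-definition copy of the ad).
ffmpeg -y -f lavfi -i testsrc=duration=4:size=640x360:rate=24 original_1984.mp4

# Extract one frame per second as numbered PNGs -- a quick way to build
# a set of reference stills for the shot list.
ffmpeg -y -i original_1984.mp4 -vf fps=1 shot_%03d.png
```

Raising the `fps=` value pulls more frames per second, which helps when shots in the source cut faster than once a second.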

2. Crafting Descriptive Prompts
Effective prompting is key when generating intentional content with AI, and AI itself is proving to be an excellent tool for quickly producing useful starting prompts. So I used AI to generate detailed descriptions of each image. While any standard, general-purpose chatbot could have done this, I instead played with the not-yet-released Google Whisk image tool (designed to lower the prompting requirements for image generation), since I noticed it would generate detailed descriptions of any image uploaded to it. This was a bit of a hack, as that's definitely not the tool's intended use case, but I liked the ease of the drag-and-drop interface and it served the purpose well, giving me detailed descriptions of each of the scene screenshots I captured earlier.

3. Generating Core Video Content
With starting prompts in hand for each of the reference images, I went to Google's brand-new Veo 2 AI video platform to see if I could recreate the scenes from scratch. As Veo offers two modes (Text-to-Video and Text-to-Image-to-Video), I tried both to generate the content. The initial output from the AI-generated "base" prompts was, unfortunately, nothing like the original footage.
For example, the initial scene:

Which generated the following description from Whisk:
"Close-up view of a dark-colored surface with the number "14" prominently displayed in light-colored numerals. A translucent, cylindrical object, appearing to be a handle or bar, extends diagonally across the frame. The cylinder has indiscernible markings etched or imprinted along its length. The background features a dark, ribbed or grooved texture, possibly metallic, that extends out of focus. The lighting is dim, casting shadows and creating a somewhat moody atmosphere. The overall color palette is dark, with grays and blacks dominating."
Came out like this:

Interesting and very cool footage, but nothing like the original scene. I then made numerous tweaks to the prompt to try to generate output closer to the original. After ten or so iterations, I arrived at the following version of the prompt:
"Distant view from within a dark-colored round vertical elevator shaft. In the middle of the shaft there's a dark rectangual building with the number "14" prominently displayed in light-colored numerals at the top. A semi-translucent, cylindrical object, appearing to be a dirty glass tunnel, extends diagonally across from the one side of the building on the right side of the image to the shaft on the near left side. This cylinder is below the viewer in the distance. The cylinder is filled with smoke inside it and has rows of identical bald men dressed in gray walking along its length. The background features a dark, ribbed or grooved texture, possibly metallic, that extends out of focus. The lighting is dim, casting shadows and creating a somewhat moody atmosphere. The overall color palette is dark, with grays and blacks dominating."

This gave me a much closer output. That said, I quickly realized that this would ultimately have to be more of a 're-imagining' of the ad than a true 're-creation'.
4. Building the "Big Brother" Character
As the Big Brother character displayed on the large monitor needed to be synced to the speech in the original video, I decided to use an AI avatar to create this clip. This was a three-step process:
Cleaning up the audio. Because the original audio clip mixed industrial sound effects with the speech, it would have been very difficult for the avatar to lip-sync accurately. To solve this, I ran the audio from the original clip through ElevenLabs' Voice Isolator tool:
Creating the Big Brother image. Following the same steps I used for the video clips, I took the AI description from Google Whisk and used Google Imagen to create a number of potential Big Brother portraits, refining the base prompt until I had a figure who fit the character from the commercial. As before, I couldn't match the exact actor, so I settled on someone with the same stern, authoritarian demeanor.
Generating a Big Brother avatar. Lastly, I used HeyGen's technology to turn the Big Brother image into an avatar synced to the cleaned-up audio from the commercial. This was the result:
5. Assembling the Final Output and Adding Effects and Close
I took the newly generated clips from Veo 2 and assembled them in CapCut, trimming each shot so its length aligned with the original clip.
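CapCut is a GUI editor, but the same trim-and-assemble pass can be sketched with ffmpeg's concat demuxer (a scriptable stand-in, not what I actually used; clip names are hypothetical). The first two commands synthesize placeholder clips so the example runs on its own.

```shell
# Make two short synthetic clips standing in for the generated Veo 2 shots.
ffmpeg -y -f lavfi -i testsrc=duration=2:size=640x360:rate=24 shot1.mp4
ffmpeg -y -f lavfi -i smptebars=duration=2:size=640x360:rate=24 shot2.mp4

# Trim a shot to match the length of the corresponding original scene.
# (-c copy cuts on keyframes, so the trimmed length is approximate.)
ffmpeg -y -i shot1.mp4 -t 1.5 -c copy shot1_trim.mp4

# List the clips in order, then concatenate them without re-encoding.
printf "file 'shot1_trim.mp4'\nfile 'shot2.mp4'\n" > shots.txt
ffmpeg -y -f concat -safe 0 -i shots.txt -c copy rough_cut.mp4
```

Stream-copy concatenation only works when the clips share a codec, resolution, and frame rate, which is the case for batches of clips generated with identical settings.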

To incorporate the Big Brother avatar, I composited the video onto generated footage of the main viewing hallway and then added overlay effects and titles to make it seem as if he were being projected on a large retro monitor with a HUD display. I also adjusted the brightness and contrast at the moment the hammer hits the screen to mimic the feeling of an explosion.
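For readers who prefer a scriptable route, the basic picture-in-picture composite (before any HUD styling) can be approximated with ffmpeg's overlay filter. This is a sketch, not the CapCut workflow I used, and the clip names are hypothetical; the first two commands synthesize placeholder footage.

```shell
# Synthetic stand-ins for the hallway footage and the avatar clip.
ffmpeg -y -f lavfi -i testsrc=duration=2:size=640x360:rate=24 hallway.mp4
ffmpeg -y -f lavfi -i smptebars=duration=2:size=640x360:rate=24 avatar.mp4

# Scale the avatar down and overlay it near the bottom-right of the
# hallway footage; shortest=1 ends the output with the shorter input.
ffmpeg -y -i hallway.mp4 -i avatar.mp4 \
  -filter_complex "[1:v]scale=160:90[pip];[0:v][pip]overlay=W-w-20:H-h-20:shortest=1" \
  composite.mp4
```

In overlay expressions, `W`/`H` are the base video's dimensions and `w`/`h` the overlay's, so `W-w-20:H-h-20` pins the inset 20 pixels from the corner.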
Lastly, I used Google Gemini to craft a short outro script, updating the original end-tag line to align the message with 2025 and The Authentic.AI. I then fed this script into ElevenLabs to generate an audio clip of an AI narrator delivering the outro, which I added to the end of the video in CapCut.
Key Takeaways:
Prompting for perspective and direction is quite difficult. No matter how many times I prompted the tool to place the viewer a certain distance above the tunnel, with the tunnel seen at a certain angle, most of the output was still generated with a straight-on view.
The Text-to-Image-to-Video approach was much better for quick prototyping and prompt refinement. Generating images first let me quickly see whether my prompts were in the ballpark and tweak them before the time-intensive process of generating the videos.
Video hallucinations are quite strange and often reflect the source footage the AI was trained on. For example, while creating the athlete with the hammer, I would occasionally get clips totally unrelated to the intended output, such as a woman in a wedding dress twirling in a meadow, appearing suddenly in the middle of my generations.
Conclusion
This experiment was a powerful reminder that while AI can replicate, it can't replace true creative vision. Recreating "1984" illuminated the brilliance of the original, forcing me to appreciate the human ingenuity behind its concept and execution. AI video generation tools are evolving at an incredible pace, yet they serve to amplify, not diminish, the role of the creator. As we venture into this new era of filmmaking, the ability to conceive original ideas and tell compelling stories remains paramount. AI is not the future of creativity; it's the tool that will help us realize its full potential.