As part of our Allied Labs initiative, we've embarked on an exciting journey into the world of AI-assisted 3D object generation and environment creation. To put these cutting-edge technologies through their paces, we've crafted a mock experiential activation project - a country music festival brand experience. This approach allows us to thoroughly explore and test these tools in a practical, real-world scenario. Our goal isn't just to evolve our process; it's to build robust capabilities with technologies that are reshaping the creative landscape. By pushing the boundaries of how we use AI and other technologies to visualize and present experiential and event concepts, we're preparing to offer our clients more innovative and immersive solutions to their marketing challenges.
For this exploration, we chose to create a branded activation for a hypothetical music streaming service at a major country music festival. Our fictional client, 'Beatnest', a leading digital music platform, aims to showcase its unique features and connect with country music fans in an immersive, engaging way. This activation would include multiple interactive zones: an eye-catching entrance, a livestream area where attendees can experience the platform's real-time streaming quality, a chill-out zone for relaxation and device charging, and an innovative VR booth for virtual concert experiences. By choosing this scenario, we're able to test our AI-assisted visualization tools across a range of creative challenges, from designing rustic-yet-modern structures to crafting engaging, tech-forward interactive spaces - all within the vibrant, dynamic context of a bustling music festival.
The Technology Landscape
Before diving into our experiment, it's important to understand the current state of the technologies we're exploring. This landscape is rapidly evolving, with each tool offering unique capabilities and challenges:
- 3D Object Generation: AI-powered 3D object generation has made impressive strides. Tools like Meshy and 3DFY can create detailed 3D models from text descriptions or 2D images. However, limitations persist in accurately interpreting complex structures or unconventional designs. While these tools excel at generating standard objects, they often struggle with highly specific or bespoke items.
- 3D Environment Generation: The creation of full 3D environments using AI is still in its infancy. While tools like Skybox AI can generate impressive 360-degree panoramas, true 3D environment generation remains a challenge. The output is often more suitable for concept visualization than for production-ready environments. However, the potential for rapid prototyping and ideation is significant.
- Unreal Engine: Epic Games' Unreal Engine represents the pinnacle of real-time 3D creation platforms. It's a mature, highly capable tool used across industries, from game development to film production and architectural visualization. While it offers unparalleled capabilities for creating interactive 3D environments, it also comes with a steep learning curve: mastering Unreal Engine requires significant time investment and technical expertise.
- Pixel Streaming: This cutting-edge technology allows high-quality, interactive 3D experiences to be streamed directly to web browsers. It's revolutionizing how we can deliver complex 3D environments and applications, making them accessible on a wide range of devices without the need for powerful local hardware. This opens up new possibilities for creating immersive brand experiences that can be easily shared and accessed.
Understanding this technological context is crucial as we explore how these tools can be integrated into our creative process. Each offers unique advantages and challenges, and our goal is to leverage their strengths while navigating their limitations to enhance our capabilities.
Our Exploration Process
Step 1: Mood Board
To kick things off, we created a mood board that captured the essence of country music festivals and brand activations. This visual foundation helped us align our AI experiments with the desired aesthetic and atmosphere.
Step 2: Initial Visualization
We then used OpenAI's GPT-4o and DALL-E to generate descriptions and 2D visualizations of our activation elements (a minimal scripted sketch of this step follows the list below). It's worth noting that we made small adjustments to the generated images via Adobe Photoshop's integrated Firefly generative AI tools in order to get them just right.
- Entrance: An eye-catching gateway where attendees first come into contact with the event space.
- Livestream Zone: Equipped with high-quality microphones and headphones for attendees to live stream performances from various festival stages through the client's music streaming platform.
- Chill-out Zone: A haven for festival goers to relax and unwind, equipped with charging stations, cooling fans, lounge seating, and ambient music streaming in from the platform.
- Interactive VR Booth: An opportunity for festival goers to step into a virtual concert and experience the power of the client's service first-hand.
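As an illustration of this step, here's a minimal sketch of generating one concept image with OpenAI's Python client. The prompt wording, model choice, and image size are our own illustrative assumptions, not a record of the exact calls we made.

```python
# Minimal sketch: generating a 2D concept image with OpenAI's Python client.
# OPENAI_API_KEY is read from the environment; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt=(
        "Wide-angle concept render of a branded entrance for a music "
        "streaming service at a country music festival: rustic wood beams, "
        "modern LED signage, golden-hour lighting, crowds in the background."
    ),
    size="1792x1024",  # wide format suits environment concepts
    n=1,               # dall-e-3 generates one image per request
)

print(response.data[0].url)  # URL of the generated concept image
```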
Step 3: 3D Object Generation
We then moved on to generating 3D objects, focusing first on the entrance way and furniture elements.
Entrance Way
We experimented with multiple tools, including Meshy, LumaLabs, and CSM 3D AI. Here's what we learned:
Meshy
- Pros: Offers both image-to-3D and text-to-3D options.
- Cons:
- Struggled with complex structures when prompted with an image, often simplifying or misinterpreting details.
- Struggled to translate detailed text descriptions into an accurate model when prompted with text.
LumaLabs
- Pros: Produced decent results with refined text prompts.
- Cons:
- Low resolution output, even for simple objects.
- Still not great at interpreting text input and translating it into an accurate 3D depiction.
CSM 3D AI
- Pros: Allows manual image segmentation, resulting in more accurate models.
- Cons: Requires more user input than fully automated solutions.
Furniture Generation
We tested Meshy, 3DFY, and a combination of GPT-4 and Meshy.
Meshy
- Pros: Can generate a wide variety of objects.
- Cons: Tends to simplify unique or bespoke elements, such as the hay bale base of one of our seat designs.
3DFY
- Pros: Produces detailed, clean 3D object outputs.
- Cons: Limited to standard furniture items, struggles with unconventional materials and shapes.
GPT-4 + Meshy
- Pros: Allows for more nuanced and detailed prompts, resulting in closer matches to desired outcomes (a sketch of this workflow follows below).
- Cons: Still faces limitations in texture and lighting.
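To make the GPT-4 + Meshy combination concrete, here's a minimal sketch of the two-stage workflow, assuming the official OpenAI Python client. The Meshy endpoint and payload fields shown are assumptions for illustration only - check Meshy's API documentation for the actual contract.

```python
# Two-stage sketch: GPT-4 expands a rough idea into a detailed text-to-3D
# prompt, which is then submitted to the 3D generator.
import os

import requests
from openai import OpenAI

client = OpenAI()

# Stage 1: ask GPT-4 for a concrete, generator-friendly prompt.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You write concise, concrete prompts for text-to-3D generators.",
        },
        {
            "role": "user",
            "content": "A festival lounge seat with a hay bale base and a padded cushion.",
        },
    ],
)
detailed_prompt = chat.choices[0].message.content

# Stage 2: submit the refined prompt to the text-to-3D service.
# NOTE: the endpoint and JSON fields below are assumed for illustration.
resp = requests.post(
    "https://api.meshy.ai/v2/text-to-3d",
    headers={"Authorization": f"Bearer {os.environ['MESHY_API_KEY']}"},
    json={"mode": "preview", "prompt": detailed_prompt},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # task details to poll for the finished model
```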
Step 4: Environment Generation
We explored Skybox AI for creating 360-degree panoramas of festival environments. While not yet true 3D environments, these panoramas offer a glimpse into the future potential of AI-generated spaces.
Step 5: Bringing It All Together in Unreal Engine
Given the current limitations of AI-generated 3D environments, we shifted our focus to Unreal Engine. However, before importing our AI-generated assets into Unreal, we took an important intermediate step: refinement in Blender.
Refining AI-Generated Models in Blender: While the AI tools produced impressive initial results, many of the models required further refinement to meet our quality standards. We used Blender, a powerful open-source 3D creation suite, to:
- Clean up geometry: Smoothing out rough edges and fixing any mesh inconsistencies.
- Enhance textures: Improving UV maps and adding detail to surfaces where AI fell short.
- Optimize for real-time rendering: Reducing polygon counts and creating LODs (Levels of Detail) for better performance in Unreal Engine.
- Add realistic materials: Applying and tweaking PBR (Physically Based Rendering) materials for more authentic looks.
- Rig and animate: Preparing certain objects for animation within the Unreal environment.
This step allowed us to maintain the efficiency of AI-generated models while ensuring they met professional standards for our final visualization.
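For those curious what this pass looks like in practice, here's a minimal Blender (bpy) sketch of the geometry cleanup and optimization steps, run on the selected object. The thresholds and ratios are illustrative starting points rather than the exact values we used.

```python
# Blender (bpy) sketch: clean up and optimize an AI-generated mesh.
import bpy

obj = bpy.context.active_object

# Clean up geometry: merge near-duplicate vertices left by the generator.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.001)  # "Merge by Distance"
bpy.ops.object.mode_set(mode='OBJECT')

# Smooth shading to hide faceting on organic shapes.
bpy.ops.object.shade_smooth()

# Optimize for real-time rendering: reduce polygon count with a Decimate
# modifier; progressively lighter copies of the mesh become LODs in Unreal.
decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
decimate.ratio = 0.5  # keep ~50% of faces; tune per asset
bpy.ops.object.modifier_apply(modifier=decimate.name)
```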
Assembling in Unreal Engine: With our refined models ready, we moved into Unreal Engine. This powerful tool allowed us to:
- Showcase our AI-generated and Blender-refined objects in a cohesive environment
- Utilize Unreal's automated environment generation tools to create the festival grounds
- Implement dynamic lighting and weather systems for varied atmospheric conditions
- Create an interactive, playable experience that allows users to explore the activation
By combining AI-generated assets, traditional 3D modeling techniques, and the power of Unreal Engine, we've created a visualization pipeline that balances efficiency with high-quality output. This approach allows us to rapidly prototype and iterate on complex experiential marketing concepts, providing our clients with immersive, interactive previews of potential activations.
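Parts of this assembly can be scripted. Below is a minimal sketch using Unreal's editor Python API (written against the UE4/early-UE5 EditorLevelLibrary; newer releases expose the same operations through editor subsystems). File and asset paths are placeholders, not our actual project layout.

```python
# Unreal Editor Python sketch: import a refined FBX and place it in the level.
import unreal

# Import the Blender-refined mesh into the project.
task = unreal.AssetImportTask()
task.filename = "C:/Assets/festival_entrance.fbx"  # placeholder path
task.destination_path = "/Game/Festival/Meshes"
task.automated = True
task.save = True
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

# Spawn the imported mesh in the level at the origin.
mesh = unreal.load_asset("/Game/Festival/Meshes/festival_entrance")
unreal.EditorLevelLibrary.spawn_actor_from_object(
    mesh, unreal.Vector(0.0, 0.0, 0.0)
)
```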
Step 6: Creating a Shareable Experience via Pixel Streaming
After creating our interactive experience in Unreal Engine, the final crucial step was to make it accessible to a wider audience - or simply to the client it was created for. This is where pixel streaming technology comes into play.
Pixel streaming allows us to host our Unreal Engine experience on a remote server and stream it directly to users' web browsers. This technology offers several key advantages:
- Accessibility: Users can access the full, high-quality 3D experience without needing powerful hardware or specialized software.
- Instant Updates: Any changes we make to the experience are immediately available to all users, without requiring downloads or updates.
- Cross-Platform Compatibility: The experience can be accessed on various devices, from smartphones to desktops, providing a consistent experience across platforms.
- Reduced File Size: Instead of large downloads, users only need to stream the visual output, significantly reducing bandwidth requirements.
Please note: the hosting server for this interactive experience is built to accommodate only a limited number of concurrent users, so check back later if you aren't granted access at first.
Implementation Process: We set up a pixel streaming server to host our Unreal Engine experience. This involved:
- Configuring the Unreal Engine project for pixel streaming
- Setting up a streaming server with the necessary protocols
- Creating a web interface for users to access the stream
The outcome of this process is a simple, shareable link or embed that allows anyone to access and interact with our 3D festival environment. This link can be easily distributed to clients, team members, or stakeholders, allowing them to explore the concept in real time, from any location.
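As a rough illustration of the launch step, here's a minimal sketch of starting a packaged build for Pixel Streaming, assuming Unreal's standard signalling server is already running on the same machine with its default port. The build path is a placeholder.

```python
# Sketch: launch a packaged Unreal build configured for Pixel Streaming.
# Assumes the signalling server (from Unreal's Pixel Streaming
# infrastructure) is already running locally on its default port 8888.
import subprocess

subprocess.Popen([
    "C:/Builds/FestivalDemo/FestivalDemo.exe",  # placeholder build path
    "-PixelStreamingURL=ws://127.0.0.1:8888",   # connect to signalling server
    "-RenderOffscreen",  # render headlessly on the host
    "-AudioMixer",       # route audio into the stream
])
```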
Looking Ahead
Our exploration has revealed both the exciting potential and current limitations of AI-powered 3D visualization tools. As these technologies continue to evolve, we anticipate:
- Improved accuracy and detail in 3D object generation
- More sophisticated environment generation capabilities
- Increased integration between AI tools and game engines like Unreal
At Allied Global Marketing, we're committed to staying at the forefront of these advancements. We'll continue to experiment with new tools and techniques, always with an eye toward how they can enhance our agency services and deliver more immersive, engaging experiences for our clients.
While our exploration doesn't cover every tool available, it provides a solid snapshot of some of the most advanced options on the market today. As we move forward, we're excited to push these technologies further, exploring new ways to blend AI-generated elements with human creativity to craft truly innovative experiential marketing solutions.
Go deeper?
Want to learn more about our approach to delivering compelling and distinctive content that deeply resonates globally - one captivating visual at a time? Get in touch.