How the Composable Pipelines of Early 3D Software Will Shape the Future of Web Images

by Chris Zacharias, CEO
September 23, 2024  |  3 minute read

As someone who has spent my life immersed in the world of graphics software, I cannot help but draw inspiration from the tools that shaped my early experiences. Boxed software like The Animation Studio by Disney, Ray Dream Studio, 3D Studio Max, and Maya still sit on my office shelf, reminding me of how far we have come. But these tools do more than evoke nostalgia—I believe they have laid the foundation for where modern graphics and AI technology are headed today.

The most striking commonality between these older tools and today’s AI-driven technologies is the concept of node-based architectures. This structure, once revolutionary in early rendering software like Maya, allowed developers and artists to create complex workflows in which the output of one operation feeds the operations downstream. It is a simple yet powerful idea: chain tasks together visually so that the outputs of one process seamlessly influence the next. In the early days of 3D graphics, this was groundbreaking. Today, it is the bedrock of generative AI workflows.
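
To make the idea concrete, here is a minimal sketch of a pull-based node graph in Python. The nodes and the toy pixel math are invented for illustration; production systems like Maya's dependency graph are far richer, but the chaining principle is the same.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    """One operation in the graph; its inputs are other nodes."""
    op: Callable
    inputs: list["Node"] = field(default_factory=list)

    def evaluate(self):
        # Pull-based evaluation: resolve every upstream node first,
        # then apply this node's operation to their outputs.
        return self.op(*(n.evaluate() for n in self.inputs))

# A toy texture chain: noise feeds blur, blur feeds brighten.
noise = Node(op=lambda: [0.2, 0.9, 0.4])
blur = Node(op=lambda px: [sum(px) / len(px)] * len(px), inputs=[noise])
brighten = Node(op=lambda px: [min(1.0, v + 0.1) for v in px], inputs=[blur])

print(brighten.evaluate())  # evaluating the last node drives the whole chain
```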

From Early 3D Software to Modern AI

Tools like ComfyUI, for example, allow artists to stack AI models in a composable pipeline to achieve impressive results, such as generating an image from a text prompt and then refining it with a secondary AI model. This workflow feels almost like magic, especially considering the hundreds of millions of dollars in compute invested in training these models, but it is not a new concept. In many ways, these modern AI tools are an extension of the graph-based systems found in early software like Ray Dream and 3D Studio Max. What is new is the scale of computation and the breadth of possibilities now available.
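
As one concrete sketch of this kind of stacking, the open-source diffusers library documents a two-stage SDXL pipeline in which a base model hands its latents to a refiner. The model IDs below are the public Stability AI checkpoints; treat this as an illustration of chained models, not a ComfyUI recipe.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Stage 1: a base text-to-image model turns the prompt into latents.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Stage 2: a secondary refiner model sharpens the base output.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a photorealistic ceramic mug on a wooden desk, studio lighting"

# Keep the intermediate result in latent space so the refiner picks up
# exactly where the base model left off, like wiring two graph nodes.
latents = base(prompt=prompt, output_type="latent").images
image = refiner(prompt=prompt, image=latents).images[0]
image.save("mug.png")
```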

In my teenage years, working with Ray Dream Studio opened my eyes to the power of node-based programming. It showed me how simple texture operations could be chained together in a visual programming environment. 3D Studio Max then expanded that horizon, allowing node-based workflows not only for textures but for geometry and even physics. Finally, Maya introduced me to the idea of an entire system based on node graphs, with its embedded scripting language (MEL) and programmable subsystems. It was here that I realized the true potential of composable pipelines—not just for rendering but for complex, dynamic environments.

Composability: The Future of Web Image Processing

Node-based logic took root in industries like visual effects, audio processing, and video compositing. But one area where this kind of flexibility has not yet matured is web image processing. That is where I see the most exciting potential. The web, unlike the worlds of VFX or game design, has not embraced the idea of a fully composable image pipeline. This is the next frontier for imgix.

The future of web image processing, in my view, lies in leveraging composable pipelines that allow for non-destructive editing and continuous optimization. Imagine a workflow where you can change an image dynamically—whether it's swapping a background or enhancing image details—without ever damaging the original file. 
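
A minimal sketch of that idea, assuming a simple edit-list design with hypothetical operation names: the original file is only ever read, and every change is a parametric instruction replayed at render time.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Edit:
    """A parametric operation; the source pixels are never modified."""
    name: str
    params: dict

@dataclass
class ImageDocument:
    source: str                       # path or URL of the untouched original
    edits: list[Edit] = field(default_factory=list)

    def with_edit(self, name: str, **params) -> "ImageDocument":
        # Appending an edit returns a new document, so any step can later
        # be removed, reordered, or re-rendered with a newer model.
        return ImageDocument(self.source, self.edits + [Edit(name, params)])

doc = (
    ImageDocument("originals/hero.jpg")
    .with_edit("swap_background", scene="studio")
    .with_edit("enhance_details", strength=0.4)
)
# A renderer would replay doc.edits against doc.source on demand;
# the file at originals/hero.jpg is never overwritten.
```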

This is more than just an improvement in flexibility; it’s about creating a system where images can evolve and improve over time, even as new generative AI models are introduced. By separating components like color, lighting, and depth, and rendering images on the fly, we would maintain the ability to tweak and update images at any point, ensuring they keep pace with the latest AI advancements. 
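
Continuing the sketch, the separated components could live as independent layers that are recombined on the fly, so any one layer can be regenerated by a newer model without re-baking the rest. The decomposition below is hypothetical and deliberately tiny:

```python
import numpy as np

# Decomposed components of one image, kept as independent layers.
# In practice these might come from an inverse-rendering or generative model.
components = {
    "albedo":   np.full((4, 4, 3), 0.8),                 # base color
    "lighting": np.full((4, 4, 1), 0.6),                 # illumination
    "depth":    np.linspace(0, 1, 16).reshape(4, 4, 1),  # scene depth
}

def render(components: dict) -> np.ndarray:
    # Recompose on the fly: shade the base color with the lighting layer.
    # The depth layer stays available for effects like background swaps.
    return np.clip(components["albedo"] * components["lighting"], 0.0, 1.0)

frame = render(components)

# Because the layers stay separate, swapping in an improved lighting
# estimate re-renders the image without touching any baked pixels.
components["lighting"] = np.full((4, 4, 1), 0.75)
frame = render(components)
```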

This shift mirrors what is already happening in computational photography, where data is captured in its rawest form, allowing for endless adjustment while preserving the integrity of the image. Instead of static, pre-baked visuals, this composable approach will enable web images to continuously adapt, improve, and stay relevant in a rapidly evolving technological landscape.

Looking Back to Move Forward

Much of this inspiration comes from looking at where we have been. Those early tools like Ray Dream and 3D Studio Max, some of which are now considered legacy software, laid the groundwork for the future we are building today. Just as those systems shaped the world of 3D graphics and visual effects, they are now influencing how we think about AI-driven image processing for the web.

In short, the future of image processing, especially for the web, will be about harnessing the power of composable, node-based systems. This is not just a technical aspiration; it is a logical next step built on decades of innovation. Just as the VFX, gaming, and audio industries have adopted node-based workflows, we are now on the brink of applying the same approach to web images, making them more dynamic, customizable, and powerful than ever before. And that is where the real magic lies.