The recent surge in AI-driven technologies that turn text prompts into images has opened up new possibilities for the creation of bespoke graphics by non-experts. But the output of models such as DALL-E and Stable Diffusion is uniformly two-dimensional.

With a new grant from the National Science Foundation, assistant professor Rana Hanocka seeks to add depth to this new approach to generating and manipulating graphics. The funding, her first from the NSF, will fuel the development of 3DStylus, a suite of tools for editing, transforming, and creating 3D content using only natural language instructions.

Assistant Professor of Computer Science Rana Hanocka.

The project builds upon Hanocka’s previous work developing AI tools that make it easier for users to work with 3D graphics. With Text2Mesh, Hanocka and her team enabled style editing for 3D objects; users can type in phrases such as “colorful crochet candle” or “astronaut horse” to change the texture and color of a 3D model. Now with 3DHighlighter, users can select specific parts of a 3D object – such as the wheels of a vehicle or the feet of a human or animal – using only a text command.

Hanocka’s vision for 3DStylus expands upon this work with three broader tools: 3D Editor, 3D Morpher, and 3D Creator. The 3D Editor will allow users to modify pre-existing 3D models without changing the base shape, such as changing the color or pattern of the cushions on a chair. With the 3D Morpher, users can make significant changes to the geometry of an object, such as modifying the shape of a chair’s legs or back. Finally, the 3D Creator will produce novel 3D objects based on text prompts, analogous to the recent wave of generative AI art models.

Transforming a 3D candle object with the text prompt “colorful crochet candle,” using Text2Mesh.

“3DStylus has the potential to revolutionize 3D content creation just as DALL-E has already done in 2D – and we aim to go a step further by giving users far greater control over what is generated,” Hanocka said. “The tools in 3DStylus can synergize to enable fine-grained and intuitive control over 3D content creation. An entire 3D model life-cycle can be realized through exchanges between the tools to facilitate new, previously unimagined 3D content. As an example, starting with generating a 3D model through text using the 3D Creator, we could then add text-driven textures via the 3D Editor and update the shape geometry using the 3D Morpher, and continue to iterate and refine the 3D model through this enhanced creative process.”

Each of these tools requires an additional layer of technical work beyond today’s text-to-image models. While the AI underlying DALL-E and its peers can be trained on the massive amount of two-dimensional image data available on the internet, 3D data is far scarcer. To meet this challenge, Hanocka has developed new approaches that use 2D images as a signal for creating and manipulating 3D objects.
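The core idea of supervising 3D content with purely 2D observations can be illustrated with a deliberately simple toy example. This is not Hanocka’s actual pipeline (which operates on rendered views of a mesh guided by a pretrained image-text model); it is only a minimal sketch of the principle: a 3D quantity is optimized so that its 2D projections match 2D targets, with no 3D ground truth ever used.

```python
# Toy sketch (illustrative only, not the 3DStylus method): recover a hidden
# 3D point from its 2D projections alone, via gradient descent. The 2D
# projections play the role that rendered 2D images play when supervising
# a 3D model with an image-based signal.

def loss(p, target_xy, target_xz):
    """Sum of squared errors between the point's two 2D projections
    (onto the xy- and xz-planes) and the observed 2D targets."""
    return ((p[0] - target_xy[0]) ** 2 + (p[1] - target_xy[1]) ** 2
            + (p[0] - target_xz[0]) ** 2 + (p[2] - target_xz[1]) ** 2)

def grad(p, target_xy, target_xz):
    # Analytic gradient of the loss with respect to the 3D coordinates.
    # Note x appears in both projections, so its gradient has two terms.
    return [2 * (p[0] - target_xy[0]) + 2 * (p[0] - target_xz[0]),
            2 * (p[1] - target_xy[1]),
            2 * (p[2] - target_xz[1])]

def fit(target_xy, target_xz, steps=200, lr=0.1):
    p = [0.0, 0.0, 0.0]  # initial 3D guess
    for _ in range(steps):
        g = grad(p, target_xy, target_xz)
        p = [pi - lr * gi for pi, gi in zip(p, g)]
    return p

# The hidden 3D point (1, 2, 3) is recovered from its 2D views alone.
recovered = fit(target_xy=(1.0, 2.0), target_xz=(1.0, 3.0))
```

Methods like Text2Mesh follow the same pattern at scale: the "projection" is a differentiable renderer, and the 2D loss is computed by a model trained on abundant internet image data, sidestepping the need for large 3D datasets.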

“Creating 3D training data is hard,” Hanocka said, “but with the novel approaches we are proposing, we can make significant advancements in the 3D space without needing lots of 3D data.”

By solving this problem and building accessible tools, 3DStylus can fulfill Hanocka’s mission of democratizing 3D graphics and reducing technical barriers so that experts and non-experts alike can more easily create 3D art, animations, and models for industry and engineering. Read more about Hanocka’s work at the website for her research group, 3DL.
