AI-generated maze
Posted by Marisa LaFalce in College of Human Ecology, Human Centered Design

Karl Lagerfeld sitting in a bathtub full of macaroni and cheese. A village of dripping, floating houses reminiscent of a Salvador Dali painting. An electric-pink 3D labyrinth. These are some of the fantastic images generated by students using artificial intelligence (AI) tools in Juan Hinestroza’s Textiles, Apparel and Innovation course. The fanciful images are inviting to the eye, but are they useful? Do they meet the designer’s vision?

AI in the classroom

Two classes in the Department of Human Centered Design (HCD) are using generative AI tools like ChatGPT, a natural language processing tool, and Midjourney, which generates images from prompts.

“I have been using these AI tools for about one year, and I learn something new every week,” said Hinestroza. “It is essential for our students to be literate in emerging technologies because industry is using them.”

The advent of generative AI has the potential to change education and teaching. In response, the Cornell administration assembled a committee in spring 2023 to develop guidelines and recommendations for using generative AI in education at Cornell. Its final report acknowledges that educators need to account for these technologies when planning their classes, since students are already using them and expect to encounter them in their future workplaces. The Center for Teaching Innovation (CTI) provides resources and workshops on generative AI that address its potential as well as common concerns like academic integrity, accessibility and ethics. CTI staff attended the final presentations in Hinestroza’s class.

Magnifying Small Spaces

This studio course, taught by Leighton Beaman, associate professor of practice, focuses on the qualities that form our understanding of what it means to be human and the corresponding impact these have on how we design and whom we design for. Students used ChatGPT and Midjourney AI tools to facilitate an iterative design process.

First, they trained the AI tools on the properties they wanted their spaces to have, organized into three categories: extensive criteria like size and dimension; intensive criteria like the space’s composition and temperature; and affective criteria, the space’s pre-cognitive impact. Next, they worked back and forth between ChatGPT prompts and image outcomes in Midjourney until the outputs matched their expectations. The final project culminated this work: designing a novel space on campus using AI tools, with materials and criteria set by the designers.

“We discovered that getting ChatGPT to understand the properties of the materials we wish to use required nuance,” said Beaman. “The tool uses a natural language model, but we found using quantifiable, algorithmic commands yields better responses.”

For example, when students used the adjective “dark” to describe a space, Midjourney returned images representing the emotion rather than the lighting. The generative image tool also returned images that skewed toward trends or exhibited bias. “No matter what prompts we tried, any use of the word ‘space’ contained a floor,” said Alex Xie ’25. The results were sometimes confounding, such as a celestial image with a tile floor.
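Beaman’s point about quantifiable commands can be illustrated with a small sketch. The snippet below (Python, purely hypothetical; the criteria names and values are ours, not the students’) assembles a prompt from the three categories of criteria described above, replacing an ambiguous adjective like “dark” with a measurable condition:

    # Hypothetical sketch: build an image-generation prompt from quantified
    # design criteria instead of ambiguous adjectives. Category names mirror
    # the course's three groups; all values are illustrative.
    def build_prompt(extensive: dict, intensive: dict, affective: list) -> str:
        """Assemble a prompt string from quantified design criteria."""
        parts = [f"{key}: {value}" for key, value in {**extensive, **intensive}.items()]
        parts += [f"evokes {feeling}" for feeling in affective]
        return "interior space, " + ", ".join(parts)

    prompt = build_prompt(
        extensive={"footprint": "3m x 4m", "ceiling height": "2.4m"},
        intensive={"material": "matte concrete", "illuminance": "under 50 lux"},
        affective=["calm", "enclosure"],
    )
    print(prompt)
    # interior space, footprint: 3m x 4m, ceiling height: 2.4m, material:
    # matte concrete, illuminance: under 50 lux, evokes calm, evokes enclosure

Here “illuminance: under 50 lux” stands in for “dark,” sidestepping the emotional reading of the word.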

Both classes found the AI tools useful for creating quick iterations, such as demonstrating a kitchen layout in multiple color or texture styles for a client.
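As a rough illustration of that iteration loop, the sketch below (again hypothetical Python; the kitchen details are invented) holds a base layout constant and sweeps color and texture variables, yielding one prompt per variant for a client to compare:

    # Hypothetical sketch: generate one prompt per style variant while
    # keeping the layout fixed. All names and values are illustrative.
    base = "modern kitchen, L-shaped layout, island with seating"
    colors = ["sage green", "navy blue", "warm terracotta"]
    textures = ["matte lacquer", "oak veneer", "honed stone"]

    variants = [f"{base}, {color} cabinets, {texture} finishes"
                for color in colors
                for texture in textures]

    for i, prompt in enumerate(variants, start=1):
        print(f"variant {i}: {prompt}")  # nine prompts, one per pairing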

The designers also learned to be clear and direct with their descriptions. “As designers, we can do better in using language and terminology that is universally understandable,” said Beaman.

Textiles, Apparel and Innovation

In the class, Hinestroza, the Rebecca Q. Morgan ’60 Professor of Fiber Science and Apparel Design, helps students explore the relationship between materials and design, with a focus on using innovative textile materials for aesthetic and functional purposes.

The final project required each student to update a design and create a scientific poster using only AI tools. Each student based their product on a project that an entire group of students had created for the same class five years earlier; AI helped them synthesize that prior work efficiently. Designs ranged from smart prescription storage to jackets for wheelchair users.

During their final presentations, students highlighted the strengths and weaknesses of AI tools, including Adobe Firefly, Adobe Sensei, Claude.AI and Microsoft Bing: some were better at generating images or sketches; some provided source materials, while others did not. Most students had to mix tools to obtain their desired results.

Anna Paaske ’24 found the process of designing a glove for arthritis sufferers challenging. The AI iterations ranged from silly to creepy as the generative tools struggled to create designs that had the appropriate number of fingers in the correct orientation.

AI’s difficulty in generating hands is well known (it became a meme on social media earlier this fall), but it points to a theme expressed by students in both courses: the tools are helpful for brainstorming, but when a student had a specific vision for a space, object, image or fiber, getting the desired result right down to the details was time-intensive and frustrating.

AI as co-pilot

“I think these AI tools are great as a co-pilot. They can offer inspiration to help you do your work, not do your work for you,” said Sarah Kim ’24.

Where do designers fit into the AI equation? “AI is changing rapidly, and we must try to keep up,” said Beaman. “We must have perspective on how these tools work, where they are useful, and how (as designers) we fit in.”