CreativeGAN

Editing Generative Adversarial Networks for Creative Design Synthesis

Amin Heyrani Nobari1, Muhammad Fathy Rashad2, Faez Ahmed1

1MIT  2Universiti Teknologi PETRONAS 

The Basics

Below you can find the most up-to-date information on this project:

Understanding Creativity

Creativity is not easily defined, despite many attempts to do so. Sarkar et al. survey the literature on creativity and its definitions and propose a “common” definition of creativity in design as the combination of “novelty” and “usefulness.” The “usefulness” of a design is often correlated with its quality and is therefore measured through quality. Engineering design tools are usually built with quality or usefulness in mind and, therefore, focus primarily on this aspect of the process but seldom address novelty.

In this work, we focus our efforts on data-driven methods for design synthesis and propose an approach for guiding existing generative models to synthesize novel designs.

Identifying and localizing novelty

The first step of detecting novelty using SPADE is identifying which samples exhibit the highest overall novelty. The SPADE method relies on features extracted from deep CNN models rather than the raw images themselves. In our implementation of SPADE, we use the Wide Residual Network (WideResNet-50) architecture pre-trained on the ImageNet dataset.

To identify and localize novelty in generated designs, we first compute and store both the intermediate features of the pre-trained WideResNet50 model and the globally averaged features of its final layer. To measure image-level novelty for any given generated design, we use the globally averaged WideResNet50 features of the samples being analyzed.

Next, we compute the novelty score for each generated sample using its globally averaged features and the globally averaged features of its 50 nearest neighbors in the dataset, as described by Eq. (5) in the paper. The resulting value serves as the novelty score of any generated design.
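A minimal sketch of this image-level scoring step, assuming the globally averaged feature vectors have already been extracted (the function name and the toy 16-dimensional features are illustrative, not from the paper; the paper's exact scoring is given by its Eq. (5)):

```python
import numpy as np

def novelty_score(z, train_feats, k=50):
    """Image-level novelty: mean Euclidean distance from a generated
    sample's globally averaged feature vector z to its k nearest
    neighbours among the stored dataset feature vectors."""
    dists = np.linalg.norm(train_feats - z, axis=1)  # distance to every dataset sample
    nearest = np.sort(dists)[:k]                     # the k smallest distances
    return nearest.mean()

# Toy demo: a feature vector far from the dataset cluster scores higher.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 16))         # stand-in for averaged WideResNet features
common = rng.normal(0.0, 1.0, size=16)
novel = common + 10.0                                # shifted far away from the cluster
assert novelty_score(novel, train, k=50) > novelty_score(common, train, k=50)
```

Samples whose average distance to their nearest neighbors is largest are the ones flagged as most novel overall.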

After scoring a set of generated designs on overall novelty and identifying the most novel designs, the next step is to find the features within novel designs that contribute the most to the overall novelty of these designs. To do this, we use a KNN-based approach. However, instead of measuring novelty scores for each design, we estimate the novelty of each pixel in each design. Instead of using the globally averaged features, we use the intermediate features of the pre-trained WideResNet50 model.

Using these pixel-wise features, together with the pixel-wise features of the 50 nearest overall neighbors found while determining the overall novelty score described above, we compute a novelty score for each pixel in a similar fashion. We then classify a pixel as belonging to a novel feature if its novelty score exceeds a given threshold. The result is a novelty map highlighting the regions within a novel design that are unique compared to other designs.
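The localization step above can be sketched as follows, assuming intermediate features have already been resampled onto a common spatial grid (shapes, threshold, and function name are illustrative assumptions):

```python
import numpy as np

def novelty_map(pixel_feats, neighbour_feats, threshold):
    """Pixel-wise novelty localisation.

    pixel_feats:     (H, W, D) intermediate features of one novel design
    neighbour_feats: (K, H, W, D) same features for its K nearest neighbours
    Returns a boolean (H, W) mask marking pixels whose mean feature distance
    to the neighbours' corresponding pixels exceeds the threshold."""
    # Distance from each pixel's feature to the same pixel in each neighbour.
    d = np.linalg.norm(neighbour_feats - pixel_feats[None], axis=-1)  # (K, H, W)
    score = d.mean(axis=0)                                            # per-pixel novelty score
    return score > threshold

# Toy demo: only the perturbed patch should be flagged as novel.
rng = np.random.default_rng(1)
base = rng.normal(size=(8, 8, 4))
neighbours = base[None] + rng.normal(scale=0.01, size=(50, 8, 8, 4))
sample = base.copy()
sample[2:4, 2:4] += 5.0                    # a locally "novel" feature
mask = novelty_map(sample, neighbours, threshold=1.0)
assert mask[2:4, 2:4].all() and not mask[0, 0]
```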

Guiding GANs towards novelty

The original GAN trained on the data may occasionally create a unique design. However, by identifying what makes the design unique, we can modify the generative model to generate many unique designs with similar features. Our goal is an approach that can take novel features synthesized by a generative model and rewrite the generalized rules established by the generator so that it synthesizes more samples with the identified novel feature. To do this, we use the GAN rewriting approach introduced by Bau et al., who proposed a way to rewrite a GAN model given a manually identified base image, a set of context images, and a mask marking the region to be edited.
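At the heart of Bau et al.'s rewriting method is treating a layer's weights as an associative memory and applying a constrained rank-one update that stores a new key-value rule while minimally disturbing existing ones. The sketch below shows only that linear-algebra core on a plain matrix, not the full optimization over StyleGAN convolution weights; the variable names and toy sizes are our own:

```python
import numpy as np

def rewrite_layer(W0, C, k_star, v_star):
    """Constrained rank-one update of a linear 'associative memory'.

    Returns W1 that minimises the change to other stored associations
    (measured under the key second-moment matrix C) subject to the new
    rule W1 @ k_star == v_star."""
    c_inv_k = np.linalg.solve(C, k_star)               # C^{-1} k*
    lam = (v_star - W0 @ k_star) / (k_star @ c_inv_k)  # residual, scaled by the constraint
    return W0 + np.outer(lam, c_inv_k)                 # rank-one correction

# Toy demo: insert one new association into a random linear layer.
rng = np.random.default_rng(2)
keys = rng.normal(size=(500, 8))
C = keys.T @ keys / len(keys)                          # key second-moment matrix
W0 = rng.normal(size=(6, 8))
k_star, v_star = rng.normal(size=8), rng.normal(size=6)
W1 = rewrite_layer(W0, C, k_star, v_star)
assert np.allclose(W1 @ k_star, v_star)                # the new rule is stored exactly
```

Because the correction is rank-one and shaped by C, keys dissimilar to k_star are left nearly unchanged, which is what lets the edit generalize without wrecking the rest of the generator.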

In this paper, we aim to build an automated design synthesis model that can give designers numerous novel design candidates without any insight or effort required from the designer, since a person cannot practically sift through thousands of designs to identify novel designs and attributes, select novel features, and apply them to other generated samples manually. To make rewriting possible in an automated fashion for this application, we introduce an approach that identifies the novel features and contextually determines where those features could be applied in other common (i.e., not novel) designs. We do this by identifying which part of a design (of the seven parts of any given bike) has the most overlap with the novelty feature identified earlier. When applying the rewriting method, we also observed that the generated designs were more realistic when features were transferred from entire parts rather than partial segments of parts, which confirmed our intuition.
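The part-selection step can be illustrated with a small sketch, assuming binary segmentation masks for the design's parts and the novelty map from the previous section are available (the function name and toy masks are ours):

```python
import numpy as np

def part_with_most_overlap(part_masks, novelty_mask):
    """Pick the design part (e.g. one of a bike's seven segmented parts)
    that overlaps most with the binary novelty mask, so the whole part,
    rather than a partial segment, is used when rewriting the GAN."""
    overlaps = [(novelty_mask & m).sum() for m in part_masks]
    return int(np.argmax(overlaps))

# Toy demo: two rectangular "parts" and a novelty blob on the second.
h = w = 10
part_a = np.zeros((h, w), bool); part_a[:5] = True   # top half of the image
part_b = np.zeros((h, w), bool); part_b[5:] = True   # bottom half of the image
novelty = np.zeros((h, w), bool); novelty[6:9, 2:5] = True
assert part_with_most_overlap([part_a, part_b], novelty) == 1
```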

Generating Novel Bike Designs

In our study, we explore the case of generating bike designs and show that they exhibit more novelty than designs generated by StyleGAN2 without CreativeGAN editing. Below are a few examples of our results.

Quantitatively Measuring Performance

To show that CreativeGAN is capable of generating novel samples, we measure the average novelty score (using SPADE, as discussed above) and the SSIM distance (the average of 1 − SSIM between generated samples and the dataset). We find that CreativeGAN can guide the original GAN model towards more novel samples that are less likely to resemble the original data.
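As a rough sketch of the SSIM-distance metric, the snippet below uses a single-window SSIM over whole images rather than the usual sliding-window variant (a simplification of ours, chosen to keep the example self-contained; in practice a library implementation such as scikit-image's would be used):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM computed over whole images (a simplified stand-in
    for the standard sliding-window SSIM)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def ssim_distance(samples, dataset):
    """Average of 1 - SSIM between every generated sample and every dataset
    image; higher values mean the samples resemble the dataset less."""
    return float(np.mean([[1.0 - ssim_global(s, d) for d in dataset] for s in samples]))

# Sanity check: an image is maximally similar to itself.
rng = np.random.default_rng(3)
img = rng.random((16, 16))
assert abs(ssim_global(img, img) - 1.0) < 1e-9
```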

Citations

Chicago

Heyrani Nobari, A, Rashad, MF, & Ahmed, F. CreativeGAN: Editing Generative Adversarial Networks for Creative Design Synthesis. Proceedings of the ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 3A: 47th Design Automation Conference (DAC). Virtual, Online. August 17–19, 2021. V03AT03A002. ASME. https://doi.org/10.1115/DETC2021-68103.

Bibtex

@proceedings{10.1115/DETC2021-68103,
    author = {Heyrani Nobari, Amin and Rashad, Muhammad Fathy and Ahmed, Faez},
    title = {CreativeGAN: Editing Generative Adversarial Networks for Creative Design Synthesis},
    volume = {Volume 3A: 47th Design Automation Conference (DAC)},
    series = {International Design Engineering Technical Conferences and Computers and Information in Engineering Conference},
    year = {2021},
    month = {08},
    doi = {10.1115/DETC2021-68103},
    url = {https://doi.org/10.1115/DETC2021-68103},
    note = {V03AT03A002},
}

ACKNOWLEDGEMENT

The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper.
