Generative Adversarial Network (GAN) models learn a latent space from which many new images can be generated. These models translate vectors from a latent space of possible designs into actual images, introducing a new degree of variability to the concept of the objectile. This research proposes applying a computational aesthetics framework to navigate the latent space and present the designer with new images that feed their imagination. Theories of part-to-whole relations from aesthetics and cognitive psychology are combined with Birkhoff’s aesthetic measure and computer vision techniques to predict aesthetic preferences and map the latent space.
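As an illustration of the kind of scoring the abstract alludes to, the sketch below applies Birkhoff's classical aesthetic measure, M = O / C (order divided by complexity), to rank candidate images. The images, their scores, and the idea of deriving O and C from computer-vision features such as symmetry or edge density are hypothetical assumptions for this example, not the paper's actual implementation.

```python
def birkhoff_measure(order: float, complexity: float) -> float:
    """Birkhoff's aesthetic measure M = O / C (order over complexity)."""
    if complexity <= 0:
        raise ValueError("complexity must be positive")
    return order / complexity

# Hypothetical (order, complexity) scores for three candidate images,
# e.g. derived from computer-vision features such as symmetry (order)
# and edge density (complexity).
candidates = {
    "img_a": (0.8, 0.4),
    "img_b": (0.6, 0.2),
    "img_c": (0.9, 0.9),
}

# Rank candidates by descending aesthetic measure (highest M first).
ranked = sorted(
    candidates,
    key=lambda name: birkhoff_measure(*candidates[name]),
    reverse=True,
)
print(ranked)
```

In a latent-space setting, such a measure could score images decoded from sampled latent vectors, steering navigation toward regions predicted to match the designer's aesthetic preferences.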