Reducing an individual's essential facial expressive sentiment can be compared to an artist establishing the range of color needed to capture a scene: the artist reserves space on the palette for only the colors required. Could deep learning models use a palette of reduced facial expressive states to train and generate reenacted images portraying an individual's emotion? Mood, audience, feelings, and environment shape and constrain expressions in both breadth and intensity, simplifying the 'palette' of expressions required to convey human nonverbal communication. After parsing facial video into cropped frames, the findings presented in this research show that these distinct images can be clustered into groups of facial expressions using unsupervised methods, and that assigning each cluster as a condition is effective for training a deep-learning generative model capable of reenacting a diverse, high-quality palette of human expressions.
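The clustering-to-condition step described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the embeddings here are synthetic stand-ins for face-crop features (in practice they would come from a face encoder applied to the cropped frames), and k-means is assumed as a representative unsupervised method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins for face-crop embeddings: 128-dim vectors drawn
# around a few well-separated expression "modes" (hypothetical data,
# replacing real encoder outputs for the sake of a runnable example).
n_modes, per_mode, dim = 5, 40, 128
centers = rng.normal(size=(n_modes, dim))
embeddings = np.concatenate(
    [c + 0.1 * rng.normal(size=(per_mode, dim)) for c in centers]
)

# Unsupervised clustering of frames into expression groups; each
# frame's cluster id then serves as the condition label for training
# a conditional generative model.
kmeans = KMeans(n_clusters=n_modes, n_init=10, random_state=0)
condition_labels = kmeans.fit_predict(embeddings)
```

Each `(frame, condition_label)` pair would then form a training example for the conditional generator, so the model learns one "palette color" per cluster.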