In Computational Creativity research, we study how to engineer software that can take on some of the creative responsibility in arts and science projects. In recent years, we have taken a practical approach to addressing questions arising in Computational Creativity. In particular, we have built software that can perform mathematical discovery; software that automates cognitive and physical aspects of the painting process; software that helps in game design; and, most recently, a corpus-based poetry generator. We have applied our research to projects in automated mathematics, video game design, graphic design and the visual arts. This broad spectrum of applications has enabled us to take a holistic view and develop various philosophical notions, resulting in a set of guiding principles for the development of autonomously creative software, and a fledgling formalisation called Computational Creativity Theory, which will be the subject of a major EPSRC-funded project that has just started.
In the talk, I will describe our practical applications, the guiding principles and the formalisations. I will then focus on one of the most thorny issues in Computational Creativity, namely how to assess the creativity of the software we write. We have argued that Turing-style comparison tests are wholly inappropriate in Computational Creativity research, as they encourage pastiche and naivety. We will discuss this issue with reference to The Painting Fool system - which we hope will one day be taken seriously as a creative artist in its own right (www.thepaintingfool.com). Like any other artist, The Painting Fool should be horrified if people confuse its creations with those of someone else - whether human or machine. So... should we really apply Turing-style tests to this aspiring creative talent?