This article describes a neural net designed 30 years ago that is only now gaining wide acceptance for its breakthrough approach. It's useful to you because it explains how the brain comes up with revolutionary ideas. The creator's neural net has discovered extra-hard substances by "thinking" about the possibilities inherent in the materials database it was programmed with, and it has discovered new chemical compounds. It also composes music.
The creator programmed in all his favorite tunes, and out came good music, some of it better than good. The net can do all this because it treats ideas as degraded memories that neighboring neurons find interesting. Push the amount of degradation to its extreme and you get only nonsense; keep it at zero and you get only the status quo.
Find the sweet spot, where the amount of degradation applied leads to revolutionary ideas, and you may need a second patent for something "it" designed to go with the first one you registered when you built it (this is actually the case for the inventor). If you're a composer, it's absolutely worth your time.
Snippets:
Creativity is perhaps the most celebrated of human capacities, embraced by the human potential movement and revered in the same light as other "folk" attributes such as spirit, soul, and free will. In the objective analysis of creativity, however, we must recognize that much of the grandeur and mystique of this cognitive phenomenon may be no more than a societal judgment that falls far short of established scientific standards.
No longer squinting at the reality, we must account for why human progress is so desultory and why human intellectual activity does not take the most direct deductive path toward a final and ultimate product. Adhering to a reductionist model, we must account for ostensibly breathtaking paradigm shifts and innovations based upon a system of cortical neurons exchanging nothing more than matter and energy with the environment.
Recently I have demonstrated (Thaler, 1995, 1996 a, b, c) that a trained artificial neural network, supplied no inputs whatsoever and driven by random perturbations to its internal architecture, may generate valuable ideas related to the conceptual space embodied within the examples it was trained with.
In short, the network is perceiving something when in fact there are no presented environmental inputs. If we were to train a simple auto-associative feedforward net on numerous examples (hence bypassing the tedious Bayesian statistics used to construct this net), set the inputs of the network to values of zero, and then randomly perturb the connection weights from their trained values, we would observe a progression of network activations corresponding to plausible schemes.
The difference in operating procedure from other work is significant, representing the distinction between perception with its processing of environmental features, and internal imagery with its inherent independence from such external entities.
In Rumelhart’s original work, an associative net is interpreting some partial environmental vector as something it has never seen. In the case of the virtual input effect described here, the net is in a state tantamount to sensory deprivation, in effect hallucinating within a silent and darkened room.
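The procedure in the snippets can be illustrated with a toy numerical sketch: train a small auto-associative feedforward net, clamp its inputs to zero, then perturb the trained weights and observe the output activations. Everything below (network size, training patterns, perturbation scales) is my own illustrative assumption, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training patterns; the auto-associative target is the input itself.
X = np.array([[1., 0., 0., 1.],
              [0., 1., 1., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.]])

# One hidden layer with biases, trained by plain backprop on the identity task.
W1 = rng.normal(0, 0.5, (4, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 0.5, (3, 4)); b2 = np.zeros(4)
lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    dy = (y - X) * y * (1 - y)         # output delta (squared-error loss)
    dh = (dy @ W2.T) * h * (1 - h)     # hidden delta
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

# "Sensory deprivation": no environmental input at all.
zero_in = np.zeros((1, 4))

# Perturb all trained parameters with noise of increasing strength:
# sigma = 0 reproduces the status quo, a large sigma degrades toward
# nonsense, and a moderate sigma yields novel but pattern-like activations.
outputs = {}
for sigma in (0.0, 0.3, 3.0):
    noise = lambda w: w + rng.normal(0, sigma, w.shape)
    h = sigmoid(zero_in @ noise(W1) + noise(b1))
    outputs[sigma] = sigmoid(h @ noise(W2) + noise(b2))
    print(sigma, np.round(outputs[sigma], 2))
```

The spectrum the post describes (zero degradation gives the status quo, extreme degradation gives nonsense, a sweet spot gives novelty) corresponds here to sweeping the noise scale `sigma` applied to the trained weights.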
The abstract in the link might be difficult to understand, but the article itself isn't.
http://imagination-engines.com/iei_semi ... nition.php