Applying evolutionary algorithms to new problem domains is an exercise in the art of parameter tuning and design decisions. A great deal of work has investigated ways to automate the tuning of EA parameters such as population size and mutation operators. However, genotype-to-phenotype mappings have typically been considered too complex to adapt automatically. We demonstrate a genetic representation learning method that uses meta-evolution to adapt a bitstring encoding for a synthetic class of real-valued optimization problems. The learned representation performs as well as or better than a Gray code, both on new instances of the problem class it was trained on and on problem types it was not trained on.