The primary goal of this research is to harness evolutionary algorithms and genomic mappings to develop efficient, effective, and ideally sparse deep learning models. This entails two main areas of focus:
- Architecture Optimization for Data Compression
- Identify the most computationally efficient architectures for compressing training data into the model while ensuring perfect recall. This exploration anticipates that different data types or skills may each require their own optimized architecture.
- Application of Neural Growth and Pruning Cycles
- The methodology involves adding layers, cross-connections, and mutations around a frozen "base model" that contains the compressed training data. This composite model will undergo phased, gated training: first data comprehension, then cognitive reasoning, and finally extrapolation, intuition, and creativity.
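As a loose illustration of the first focus area, the sketch below runs a minimal mutation-only evolutionary search over a hypothetical genome of MLP hidden-layer widths, minimizing parameter count subject to a capacity floor that stands in for "perfect recall." The genome encoding, the fitness function, the capacity proxy, and all constants are assumptions made for illustration, not the project's actual method.

```python
import random

# Hypothetical genome: a tuple of hidden-layer widths for an MLP.
# Fitness (minimized): parameter count, heavily penalized whenever the
# capacity proxy (here, raw parameter count) falls below the threshold
# standing in for "memorize the training set with perfect recall."
INPUT_DIM, OUTPUT_DIM = 32, 32
CAPACITY_NEEDED = 5000  # toy stand-in for a real recall requirement

def param_count(widths):
    dims = [INPUT_DIM, *widths, OUTPUT_DIM]
    return sum(a * b + b for a, b in zip(dims, dims[1:]))  # weights + biases

def fitness(widths):
    p = param_count(widths)
    return p if p >= CAPACITY_NEEDED else p + 10 * (CAPACITY_NEEDED - p)

def mutate(widths, rng):
    w = list(widths)
    i = rng.randrange(len(w))
    w[i] = max(1, w[i] + rng.choice([-8, -4, 4, 8]))  # nudge one layer width
    return tuple(w)

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.randrange(8, 128) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # truncation selection
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
print(best, param_count(best))
```

In a real experiment the fitness evaluation would train each candidate and measure recall directly; the parameter-count proxy here only keeps the sketch self-contained and fast.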
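The growth-and-pruning cycle around a frozen base model can be sketched at a purely structural level as follows. The `Layer` class, the `grow` and `prune` helpers, and the magnitude-pruning threshold are hypothetical names invented for this sketch; no training loop is shown.

```python
import random

# Structural sketch of a growth/prune cycle around a frozen base model.
# Plain Python lists stand in for weight matrices.
class Layer:
    def __init__(self, weights, frozen=False):
        self.weights = weights  # flat list of floats
        self.frozen = frozen    # frozen layers are never updated or pruned

def grow(model, width, rng):
    """Append a new trainable layer around the frozen base."""
    model.append(Layer([rng.uniform(-1, 1) for _ in range(width)]))

def prune(model, threshold=0.1):
    """Zero out small-magnitude weights in trainable layers only."""
    for layer in model:
        if not layer.frozen:
            layer.weights = [w if abs(w) >= threshold else 0.0
                             for w in layer.weights]

rng = random.Random(0)
base = Layer([0.5] * 8, frozen=True)  # compressed training data lives here
model = [base]
for _ in range(3):                    # three growth/prune cycles
    grow(model, width=8, rng=rng)
    # (a phased, gated training step would run here before pruning)
    prune(model)

assert base.weights == [0.5] * 8      # the frozen base is untouched
```

The key invariant the sketch demonstrates is that growth and pruning only ever touch the new trainable layers, leaving the compressed base model intact across cycles.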