Modern Kernel Methods
The primary goal of this thesis is to bridge two powerful learning paradigms, kernel methods and deep learning, in order to design novel neural architectures. Kernel methods rest on a strong theoretical foundation, whereas deep learning, despite its widespread success in real-world applications, often operates as a black box. In an era where such opaque models increasingly shape the economy and daily life, it is crucial to develop architectures that combine the theoretical clarity of kernel methods with the empirical effectiveness of deep learning. At present, computer scientists largely lead the charge in advancing state-of-the-art models, while many mathematicians work retrospectively to understand why these methods are so successful. A key motivation for this thesis, admittedly a personal one, is the hope that mathematical theory can not only explain existing architectures but also inspire the development of better ones from the outset.