Abstract
Mechanistic Interpretability aims to reverse engineer the algorithms implemented by neural networks by studying their weights and activations. An obstacle to reverse engineering neural networks is that many of the parameters inside a network are not involved in the computation being implemented by the network. These degenerate parameters may obfuscate internal structure. Singular learning theory teaches us that neural network parameterizations are biased towards being more degenerate, and parameterizations with more degeneracy are likely to generalize further. We identify 3 ways that network parameters can be degenerate: linear dependence between activations in a layer; linear dependence between gradients passed back to a layer; ReLUs which fire on the same subset of datapoints. We also present a heuristic argument that modular networks are likely to be more degenerate, and we develop a metric for identifying modules in a network that is based on this argument. We propose that if we can represent a neural network in a way that is invariant to reparameterizations that exploit the degeneracies, then this representation is likely to be more interpretable, and we provide some evidence that such a representation is likely to have sparser interactions. We introduce the Interaction Basis, a tractable technique to obtain a representation that is invariant to degeneracies from linear dependence of activations or Jacobians.
Overview
- The paper investigates how degeneracy in neural networks (multiple parameter sets yielding the same functionality) affects the interpretability of network algorithms, and proposes methods for mitigating its impact.
- Using concepts from Singular Learning Theory (SLT), the study quantifies degeneracy and explores internal structures within networks that contribute to reparameterization freedom, such as activation vectors, Jacobians, and synchronized nonlinearities.
- The authors argue that modular networks are likely to exhibit higher degeneracy, and propose an 'interaction basis' that sparsifies interactions and eliminates redundant parameters, which could make networks more interpretable.
Understanding Mechanistic Interpretability and Degeneracy in Neural Networks
Introduction
Mechanistic Interpretability aims to demystify the inner workings of neural networks by understanding the algorithms they implement. Recent research highlights a persistent challenge in this field: neurons within these networks often respond to a variety of unrelated inputs, and the apparent circuits within the models typically lack clear, well-defined boundaries. One underlying factor contributing to this murkiness is the degeneracy of neural networks, meaning that various sets of parameters might achieve the same functionality. This paper investigates the implications of this degeneracy and proposes methods to mitigate its obstructive influence on interpretability.
Singular Learning Theory (SLT) and Effective Parameter Count
In neural networks, degeneracy means that many different parameter combinations yield the same loss. This is especially clear at the network's global minimum, where many parameter sets may lie within a broad, flat "basin" of the loss landscape. Singular Learning Theory (SLT) helps quantify this degeneracy through the Local Learning Coefficient (LLC).
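To make the flat-basin picture concrete, consider a toy two-parameter "network" f(x) = w2 * w1 * x. Only the product w1 * w2 affects the output, so every parameter pair with the same product achieves exactly the same loss, and the set of minima is a curve rather than a point. The sketch below is purely illustrative and is not taken from the paper.

```python
import numpy as np

# Toy "network": f(x) = w2 * w1 * x. Only the product w1 * w2 matters,
# so the set of loss-minimising parameters is a curve, not an isolated point.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 3.0 * x  # target function: multiply the input by 3

def mse(w1, w2):
    return np.mean((w2 * w1 * x - y) ** 2)

# Two very different parameterizations with the same product give the same loss.
print(mse(1.0, 3.0))   # ~0.0
print(mse(30.0, 0.1))  # ~0.0: same function, different parameters
# Moving along the degenerate direction (w1 -> a*w1, w2 -> w2/a) leaves the loss flat.
```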
Key Points:
- Loss Landscape & Degeneracy: Parameterizations with lower LLC values are more degenerate; SLT indicates that such networks generalize better and implement more universal algorithms.
- Behavioral Loss: Because real trained networks do not sit exactly at a minimum of their training loss, the authors propose measuring degeneracy with a behavioral loss that compares a network's outputs against those of the specific trained network under study.
- Finite-Data SLT: By shifting focus from the infinite-data limit to finite data, the study extends the notion of degeneracy to approximate degeneracies, directions in which the loss changes by less than the noise present in practical training scenarios.
Internal Structures Contributing to Degeneracy
The researchers identify three internal network structures that can produce reparameterization freedom, effectively lowering the network's effective parameter count:
- Activation Vectors: If the activation vectors in a particular layer do not span the layer's full dimensionality, the unused directions introduce redundancy.
- Jacobians: Similarly, if the Jacobians passed back to a layer do not span their full space, reparameterization freedom results.
- Synchronized Nonlinearities: Neurons that always fire on the same subset of datapoints add another source of degeneracy, especially relevant in networks using ReLU activation functions.
Examples:
- Low Dimensional Activations: Linear dependencies within the activation space of hidden layers can result in free parameters that don't contribute to network functionality.
- Synchronized Neurons: If neurons within a layer always activate together, it introduces more redundancy.
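Both of these examples can be probed empirically from a batch of activations. The sketch below is an illustration rather than code from the paper: it counts near-zero eigenvalues of the activation Gram matrix to find unused directions, and compares ReLU firing masks to find neurons that always fire together. The synthetic data, layer width, and tolerance are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, width = 512, 8

# Pre-activations constructed so that the layer is deliberately low-rank (rank 3)
# and so that neuron 4 is a positive multiple of neuron 0 (synchronized firing).
latent = rng.normal(size=(n_samples, 3))
pre_acts = latent @ rng.normal(size=(3, width))
pre_acts[:, 4] = 2.0 * pre_acts[:, 0]

# 1) Low-dimensional activations: near-zero eigenvalues of the Gram matrix
#    correspond to directions the layer never uses.
gram = pre_acts.T @ pre_acts / n_samples
eigvals = np.linalg.eigvalsh(gram)
n_unused = int(np.sum(eigvals < 1e-8 * eigvals.max()))
print("unused activation directions:", n_unused)  # 5 of 8

# 2) Synchronized ReLUs: neurons whose firing masks agree on every datapoint.
firing = pre_acts > 0
for i in range(width):
    for j in range(i + 1, width):
        if np.array_equal(firing[:, i], firing[:, j]):
            print(f"neurons {i} and {j} fire on the same datapoints")
```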
Interaction Sparsity from Parameterization Invariance
The authors propose that a representation of the network which is invariant to these reparameterizations is likely to have sparser interactions, and therefore to be more interpretable. Greater interaction sparsity can be achieved by choosing such a representation:
- Low Dimensional Activations: Sparsify network interactions by identifying and effectively ignoring activation directions with little or no variance.
- Synchronized Neurons: Further prune interactions by recognizing pairs or blocks of neurons with synchronized firing patterns (see the sketch below).
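As a toy illustration of the second point: two ReLU neurons whose pre-activations are positive multiples of each other fire on exactly the same datapoints, and can be replaced by a single effective neuron without changing the layer's output, removing one set of interactions to the next layer. The specific weights below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 4))

w = rng.normal(size=4)   # input weights of neuron A
w_sync = 2.0 * w         # neuron B: its pre-activation is always 2x neuron A's
c_a, c_b = 0.7, -0.3     # output weights of the two neurons

relu = lambda z: np.maximum(z, 0.0)

# Two-neuron computation: the neurons fire on exactly the same datapoints.
two_neurons = c_a * relu(x @ w) + c_b * relu(x @ w_sync)

# Equivalent single "effective neuron": fold B's contribution into A's output weight.
one_neuron = (c_a + 2.0 * c_b) * relu(x @ w)

print(np.allclose(two_neurons, one_neuron))  # True: the pair collapses to one neuron
```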
Modularity and Degeneracy
One intriguing hypothesis is that more modular networks exhibit lower LLCs. When modules are sparsely connected, the free directions within each module remain independent, so their degeneracies add up across the network. The notion is that modular structures can possess more degeneracies, which would also make the network easier to understand:
- Non-Interacting Modules: Modules that do not share variables maintain independent free directions.
- Interaction Strength: The strength of interaction between modules can be analyzed on a logarithmic scale, helping identify modular structure more accurately (a rough scoring sketch follows this list).
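One simple way to make this concrete is to score a candidate partition of neurons into modules by the cross-module interaction strengths on a logarithmic scale, preferring partitions whose between-module weights are weak. The scoring function, weight matrix, and partitions below are illustrative assumptions loosely inspired by the paper's logarithmic measure, not a reproduction of it.

```python
import numpy as np

def cross_module_score(weight, partition):
    """Sum of log(1 + |w|) over weights connecting different candidate modules.

    weight[i, j] connects input neuron j to output neuron i; partition maps each
    neuron index to a module label (the same labelling is used for both layers
    here, purely for illustration). Lower scores suggest a cleaner modular split.
    """
    labels = np.asarray(partition)
    cross = labels[:, None] != labels[None, :]  # True where i and j lie in different modules
    return float(np.sum(np.log1p(np.abs(weight)) * cross))

# A roughly block-diagonal weight matrix: strong within-module, weak between-module weights.
rng = np.random.default_rng(0)
weight = 0.01 * rng.normal(size=(6, 6))
weight[:3, :3] += rng.normal(size=(3, 3))
weight[3:, 3:] += rng.normal(size=(3, 3))

good_split = [0, 0, 0, 1, 1, 1]  # matches the block structure
bad_split = [0, 1, 0, 1, 0, 1]   # cuts through the blocks
print(cross_module_score(weight, good_split))  # small
print(cross_module_score(weight, bad_split))   # much larger
```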
The Interaction Basis
Finally, the idea of the "Interaction Basis" is introduced to represent neural networks in a form that is robust against reparameterization due to low-rank activations or Jacobians. This involves transforming the basis in each layer to:
- Eliminate Directions with Zero Eigenvalues: Discard directions associated with zero eigenvalues of the Gram matrices of the activations and of the Jacobians.
- Diagonalize Interactions: Simplify layer-to-layer transitions so that the resulting bases align with the network's principal components (a sketch of the activation-side step follows this list).
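The sketch below illustrates only the activation-side half of such a construction, under assumptions made for this summary: eigendecompose the Gram matrix of a layer's activations over a dataset, drop directions with near-zero eigenvalues, and rotate the activations into the surviving eigenbasis. The full interaction basis in the paper also uses the Jacobians to the next layer; that step is omitted here, and the tolerance and synthetic data are illustrative.

```python
import numpy as np

def activation_basis(acts, tol=1e-8):
    """Rotate activations into the eigenbasis of their Gram matrix, dropping
    directions with near-zero eigenvalues (the activation-side part of an
    interaction-basis-style construction)."""
    gram = acts.T @ acts / acts.shape[0]
    eigvals, eigvecs = np.linalg.eigh(gram)  # eigenvalues in ascending order
    keep = eigvals > tol * eigvals.max()     # discard unused directions
    basis = eigvecs[:, keep]                 # columns are the surviving directions
    return acts @ basis, basis

# Activations that only occupy a 3-dimensional subspace of a 10-neuron layer.
rng = np.random.default_rng(0)
acts = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 10))

rotated, basis = activation_basis(acts)
print(acts.shape, "->", rotated.shape)  # (2000, 10) -> (2000, 3)
```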
Implications and Future Directions
Practical: The interaction basis technique could sharpen our understanding and visualization of how neural networks process information by reducing redundant parameters.
Theoretical: The findings open avenues for more extensive studies on how network modularity and degeneracy interplay, influencing the ease of network interpretability.
Speculative Future: There is unexplored potential in applying these techniques to larger, more complex networks, such as state-of-the-art language models, potentially making them more interpretable and efficient.
By carefully examining the internal degeneracies and their implications, this paper takes significant steps toward making neural networks more transparent and interpretable. The full impact of these findings will likely unfold as these concepts are further validated and extended in future research.