Over the last few decades, the field of parameterised algorithms, with fixed-parameter tractability (FPT) as its main tool, has provided new methods for analysing old algorithms and new design techniques for algorithms.
The basic idea of parameterised algorithms is that we 'pick' a parameter of the input other than its size (such as treewidth) and design algorithms that are efficient whenever that parameter is small — formally, with running time f(k) · poly(n) for some function f of the parameter k alone.
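For concreteness, here is a minimal sketch (my own illustration, not part of the question) of perhaps the best-known FPT technique, the bounded search tree: the classic O(2^k · m) algorithm for k-Vertex Cover, parameterised by solution size k rather than input size.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists.

    Bounded search tree: any cover must contain an endpoint of every
    edge, so we branch on the two endpoints of an arbitrary edge.
    Depth of recursion is at most k, giving O(2^k * m) time.
    """
    if not edges:
        return set()        # all edges covered
    if k == 0:
        return None         # edges remain but budget exhausted
    u, v = edges[0]
    # Branch: either u or v must be in any cover of edge (u, v).
    for w in (u, v):
        rest = [e for e in edges if w not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None
```

The running time is exponential only in k, and polynomial in the graph size, which is exactly the FPT notion of "efficient when the parameter is small".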
I wonder whether the analysis and design of quantum algorithms can benefit from this approach. Has this been done, or are there good reasons why it is likely to be ineffective, or why it has been ignored so far?