Work completed for the Large Language Model course as part of the IASD Master's program. We explored the Linformer paper, which proposes a more efficient transformer whose self-attention runs in time linear in the sequence length by projecting the keys and values down to a fixed lower dimension. We attempted to replicate some of the experiments described in the paper. In the provided notebook, we compare our Linformer implementation with the vanilla transformer across configurations with different sequence lengths and batch sizes.
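To illustrate the idea, below is a minimal sketch of Linformer-style single-head self-attention in PyTorch. It is not our notebook implementation; the class name, the parameter names `proj_k`/`proj_v`, and the projected dimension `k=64` are illustrative choices for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinformerSelfAttention(nn.Module):
    """Single-head self-attention with low-rank sequence projections (sketch)."""

    def __init__(self, embed_dim: int, seq_len: int, k: int = 64):
        super().__init__()
        self.scale = embed_dim ** -0.5
        self.to_q = nn.Linear(embed_dim, embed_dim, bias=False)
        self.to_k = nn.Linear(embed_dim, embed_dim, bias=False)
        self.to_v = nn.Linear(embed_dim, embed_dim, bias=False)
        # Learned projections compress the sequence axis n -> k,
        # so the attention map is (n x k) instead of (n x n).
        self.proj_k = nn.Parameter(torch.randn(k, seq_len))
        self.proj_v = nn.Parameter(torch.randn(k, seq_len))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        q = self.to_q(x)                      # (b, n, d)
        k = self.proj_k @ self.to_k(x)        # (b, k, d): keys projected along the sequence axis
        v = self.proj_v @ self.to_v(x)        # (b, k, d): values projected along the sequence axis
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # (b, n, k)
        return attn @ v                       # (b, n, d)

# Usage example (hypothetical sizes)
x = torch.randn(2, 512, 128)                  # batch=2, seq_len=512, embed_dim=128
out = LinformerSelfAttention(embed_dim=128, seq_len=512, k=64)(x)
print(out.shape)                              # torch.Size([2, 512, 128])
```

Because the softmax is taken over only `k` projected positions rather than all `n` tokens, memory and compute for the attention map scale as O(n·k) rather than O(n²), which is the effect we measure against the vanilla transformer in the notebook.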
For more details, please refer to our report.