Related research

04/13/2023 ∙ RSIR Transformer: Hierarchical Vision Transformer using Random Sampling Windows and Important Region Windows
Recently, Transformers have shown promising performance in various visio...

09/19/2022 ∙ Axially Expanded Windows for Local-Global Interaction in Vision Transformers
Recently, Transformers have shown promising performance in various visio...

06/15/2022 ∙ Self-Supervised Implicit Attention: Guided Attention by The Model Itself
We propose Self-Supervised Implicit Attention (SSIA), a new approach tha...

06/10/2022 ∙ Position Labels for Self-Supervised Vision Transformer
Position encoding is important for vision transformer (ViT) to capture t...

03/30/2022 ∙ ReplaceBlock: An improved regularization method based on background information
Attention mechanism, being frequently used to train networks for better ...