Computer vision -- ACCV 2020 : 15th Asian Conference on Computer Vision, Kyoto, Japan, November 30 - December 4, 2020, revised selected papers

Bibliographic Information

Hiroshi Ishikawa ... [et al.] (eds.)

(Lecture notes in computer science, 12625. LNCS sublibrary; SL 6. Image processing, computer vision, pattern recognition, and graphics)

Springer, c2021

  • pt. 4

Other Title

ACCV 2020

Note

"The Asian Conference on Computer Vision (ACCV) 2020, originally planned to take place in Kyoto, Japan, was held online during November 30 - December 4, 2020."--Preface

Other editors: Cheng-Lin Liu, Tomas Pajdla, Jianbo Shi

Includes bibliographical references and author index

Description and Table of Contents

Description

The six-volume set LNCS 12622-12627 constitutes the proceedings of the 15th Asian Conference on Computer Vision, ACCV 2020, held in Kyoto, Japan, in November/December 2020.* A total of 254 contributions were carefully reviewed and selected from 768 submissions during two rounds of reviewing and improvement. The papers focus on the following topics:

Part I: 3D computer vision; segmentation and grouping
Part II: low-level vision, image processing; motion and tracking
Part III: recognition and detection; optimization, statistical methods, and learning; robot vision
Part IV: deep learning for computer vision; generative models for computer vision
Part V: face, pose, action, and gesture; video analysis and event recognition; biomedical image analysis
Part VI: applications of computer vision; vision for X; datasets and performance analysis

* The conference was held virtually.

Table of Contents

Deep Learning for Computer Vision

  • In-sample Contrastive Learning and Consistent Attention for Weakly Supervised Object Localization
  • Exploiting Transferable Knowledge for Fairness-aware Image Classification
  • Introspective Learning by Distilling Knowledge from Online Self-explanation
  • Hyperparameter-Free Out-of-Distribution Detection Using Cosine Similarity
  • Meta-Learning with Context-Agnostic Initialisations
  • Second Order enhanced Multi-glimpse Attention in Visual Question Answering
  • Localize to Classify and Classify to Localize: Mutual Guidance in Object Detection
  • Unified Density-Aware Image Dehazing and Object Detection in Real-World Hazy Scenes
  • Part-aware Attention Network for Person Re-Identification
  • Image Captioning through Image Transformer
  • Feature Variance Ratio-Guided Channel Pruning for Deep Convolutional Network Acceleration
  • Learn more, forget less: Cues from human brain
  • Knowledge Transfer Graph for Deep Collaborative Learning
  • Regularizing Meta-Learning via Gradient Dropout
  • Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks
  • Towards Optimal Filter Pruning with Balanced Performance and Pruning Speed
  • Contrastively Smoothed Class Alignment for Unsupervised Domain Adaptation
  • Double Targeted Universal Adversarial Perturbations
  • Adversarially Robust Deep Image Super-Resolution using Entropy Regularization
  • Online Knowledge Distillation via Multi-branch Diversity Enhancement
  • Rotation Equivariant Orientation Estimation for Omnidirectional Localization
  • Contextual Semantic Interpretability
  • Few-Shot Object Detection by Second-order Pooling
  • Depth-Adapted CNN for RGB-D cameras

Generative Models for Computer Vision

  • Over-exposure Correction via Exposure and Scene Information Disentanglement
  • Novel-View Human Action Synthesis
  • Augmentation Network for Generalised Zero-Shot Learning
  • Local Facial Makeup Transfer via Disentangled Representation
  • OpenGAN: Open Set Generative Adversarial Networks
  • CPTNet: Cascade Pose Transform Network for Single Image Talking Head Animation
  • TinyGAN: Distilling BigGAN for Conditional Image Generation
  • A cost-effective method for improving and re-purposing large, pre-trained GANs by fine-tuning their class-embeddings
  • RF-GAN: A Light and Reconfigurable Network for Unpaired Image-to-Image Translation
  • GAN-based Noise Model for Denoising Real Images
  • Emotional Landscape Image Generation Using Generative Adversarial Networks
  • Feedback Recurrent Autoencoder for Video Compression
  • MatchGAN: A Self-Supervised Semi-Supervised Conditional Generative Adversarial Network
  • DeepSEE: Deep Disentangled Semantic Explorative Extreme Super-Resolution
  • dpVAEs: Fixing Sample Generation for Regularized VAEs
  • MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network
  • EvolGAN: Evolutionary Generative Adversarial Networks
  • Sequential View Synthesis with Transformer

by "Nielsen BookData"

Details

  • NCID
    BC06866436
  • ISBN
    • 9783030695378
  • Country Code
    sz
  • Title Language Code
    eng
  • Text Language Code
    eng
  • Place of Publication
    Cham
  • Pages/Volumes
    xviii, 715 p.
  • Size
    24 cm