Abstract

Real-world applications could benefit from the ability to automatically retarget an image to different aspect ratios and resolutions while preserving its visually and semantically important content. However, not all images are equally amenable to such processing. This study introduces the notion of image retargetability to describe how well a particular image can be handled by content-aware image retargeting. We propose to learn a deep convolutional neural network to rank photo retargetability, in which the relative ranking of photo retargetability is directly modeled in the loss function. Our model incorporates the joint learning of meaningful photographic attributes and image content information, which helps regularize the complicated retargetability rating problem. To train and analyze this model, we collect a dataset that contains retargetability scores and meaningful image attributes assigned by six expert raters. The experiments demonstrate that our unified model can generate retargetability rankings that are highly consistent with human labels. To further validate our model, we show applications of image retargetability in retargeting method selection, retargeting method assessment, and photo collage generation.
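The abstract states that the relative ranking of retargetability is modeled directly in the loss function. The paper's exact formulation is not given here, but a common way to do this is a pairwise hinge (margin) ranking loss over image pairs with a known preference order. The sketch below is an illustrative assumption, not the authors' implementation; the function name and margin value are hypothetical.

```python
# Minimal sketch of a pairwise ranking loss, assuming a margin-based
# formulation: for each pair where image "hi" is rated more retargetable
# than image "lo", the network is penalized unless its predicted score
# for "hi" exceeds that for "lo" by at least the margin.
def pairwise_ranking_loss(scores_hi, scores_lo, margin=1.0):
    """Mean hinge ranking loss over score pairs.

    scores_hi: predicted scores for the more-retargetable image of each pair
    scores_lo: predicted scores for the less-retargetable image of each pair
    """
    losses = [max(0.0, margin - (hi - lo))
              for hi, lo in zip(scores_hi, scores_lo)]
    return sum(losses) / len(losses)

# Correctly ordered pair with a gap larger than the margin: zero loss.
print(pairwise_ranking_loss([3.0], [1.0]))  # 0.0
# Inverted pair: positive loss proportional to the violation.
print(pairwise_ranking_loss([1.0], [3.0]))  # 3.0
```

In practice the scores would come from the CNN's output head, and the loss would be minimized with stochastic gradient descent over many labeled pairs.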

Resources
Citation
@article{DBLP:journals/corr/abs-1802-04392,
  author    = {Fan Tang and
               Weiming Dong and
               Yiping Meng and
               Chongyang Ma and
               Fuzhang Wu and
               Xinrui Li and
               Tong{-}Yee Lee},
  title     = {Image Retargetability},
  journal   = {CoRR},
  volume    = {abs/1802.04392},
  year      = {2018},
  url       = {http://arxiv.org/abs/1802.04392},
  archivePrefix = {arXiv},
  eprint    = {1802.04392},
  timestamp = {Mon, 13 Aug 2018 16:47:30 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1802-04392},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}