Tuesday, October 11, 2016

Recency Effect

Higher quality at the end of a video clip leads to higher overall QoE, because recent impressions weigh more heavily in a viewer's quality judgment.
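
A simple way to reflect this when pooling per-segment quality into an overall score is to weight later segments more heavily. Below is a minimal sketch assuming per-segment scores on a 1–5 scale and an exponential recency weighting; the weighting scheme and half-life are illustrative assumptions, not taken from a particular published model.

```python
def recency_weighted_qoe(segment_scores, half_life=4.0):
    """Pool per-segment quality scores into one overall QoE estimate.

    Later segments get exponentially larger weights, so a clip that ends
    on high quality scores better than one that starts high and degrades,
    even when the plain average is identical.
    """
    n = len(segment_scores)
    # Weight halves for every `half_life` segments of distance from the clip end.
    weights = [0.5 ** ((n - 1 - i) / half_life) for i in range(n)]
    return sum(w * s for w, s in zip(weights, segment_scores)) / sum(weights)

# Same average quality (3.0), different temporal ordering.
print(recency_weighted_qoe([5, 4, 3, 2, 1]))  # degrades towards the end -> lower score
print(recency_weighted_qoe([1, 2, 3, 4, 5]))  # improves towards the end -> higher score
```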

HTTP Adaptive Streaming Standards


  • MPEG DASH Standard [1]
  • 3GP DASH Standard [2]
  • HbbTV DASH Recommendation [3]


A comparison in terms of data description format, video codec, audio codec, format, and segment length can be found in TABLE II of [4].
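
All of these standards follow the same segment-based pattern: the client downloads a manifest describing the available representations, then repeatedly picks a representation whose bitrate fits the measured throughput and fetches the next segment over plain HTTP. The sketch below illustrates that adaptation loop only in spirit; the manifest fields, URLs, and the 0.8 safety factor are hypothetical simplifications, not the exact syntax of any of the standards above.

```python
import time
import urllib.request

# Hypothetical, simplified manifest: each representation is just a
# bitrate (bit/s) and a URL template for its segments.
representations = [
    {"bandwidth":   500_000, "template": "http://example.com/video_500k_{i}.m4s"},
    {"bandwidth": 1_500_000, "template": "http://example.com/video_1500k_{i}.m4s"},
    {"bandwidth": 3_000_000, "template": "http://example.com/video_3000k_{i}.m4s"},
]

def pick_representation(throughput_bps, safety=0.8):
    """Highest representation whose bitrate still fits under the measured throughput."""
    fitting = [r for r in representations if r["bandwidth"] <= safety * throughput_bps]
    return fitting[-1] if fitting else representations[0]

throughput = 1_000_000  # initial throughput guess, bit/s
for i in range(10):     # fetch the first 10 segments
    rep = pick_representation(throughput)
    url = rep["template"].format(i=i)
    start = time.time()
    data = urllib.request.urlopen(url).read()
    elapsed = max(time.time() - start, 1e-6)
    throughput = 8 * len(data) / elapsed  # update the throughput estimate
```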



References
[1] Information Technology—Dynamic Adaptive Streaming Over HTTP (DASH)—Part 1: Media Presentation Description and Segment Formats, ISO/IEC 23009-1:2012, 2012.
[2] European Telecommunications Standards Institute (ETSI), Universal Mobile Telecommunications System (UMTS); LTE; Transparent End-to-End Packet-Switched Streaming Service (PSS); Protocols and Codecs, 3GPP TS 26.234 Version 9.1.0 Release 9, Sophia Antipolis, France, 2009.
[3] HbbTV Specification, HbbTV Association, Erlangen, Germany, 2012.
[4] M. Seufert et al., "A survey on quality of experience of HTTP adaptive streaming," IEEE Commun. Surveys Tuts., vol. 17, no. 1, pp. 469–492, 2015.

Saturday, October 8, 2016

VMAF (Video Multi-Method Assessment Fusion)

VMAF (Video Multi-Method Assessment Fusion) is a perceptual quality metric developed by Netflix in collaboration with University of Southern California researchers [1].
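
A convenient way to compute VMAF scores is an ffmpeg build that includes the libvmaf filter. Below is a minimal sketch, assuming such an ffmpeg build is on the PATH and that reference.mp4 and distorted.mp4 are frame-aligned clips of the same resolution; the file names and log path are placeholders.

```python
import subprocess

# libvmaf takes the distorted clip as the first input and the reference
# as the second; per-frame and pooled scores are written to the JSON log.
cmd = [
    "ffmpeg",
    "-i", "distorted.mp4",
    "-i", "reference.mp4",
    "-lavfi", "libvmaf=log_fmt=json:log_path=vmaf.json",
    "-f", "null", "-",
]
subprocess.run(cmd, check=True)
```

The pooled VMAF score is also printed in ffmpeg's log output.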


References
[1] "Toward a practical perceptual video quality metric," The Netflix Tech Blog, Jun. 2016. [Online]. Available: http://techblog.netflix.com/2016/06/toward-practical-perceptual-video.html


Friday, October 7, 2016

Internet video traffic

Global Internet video traffic amounted to roughly 15 EB per month in 2012, which corresponded to 57% of all consumer Internet traffic. By 2017, it is expected to reach about 52 EB per month, or 69% of all consumer Internet traffic [1].
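
Taken together, these figures also determine the totals they imply; a quick back-of-the-envelope check, using only the numbers quoted above:

```python
video_2012, share_2012 = 15, 0.57   # EB/month and share of consumer traffic, 2012
video_2017, share_2017 = 52, 0.69   # forecast for 2017

total_2012 = video_2012 / share_2012   # implied total consumer traffic, ~26 EB/month
total_2017 = video_2017 / share_2017   # implied total consumer traffic, ~75 EB/month
video_cagr = (video_2017 / video_2012) ** (1 / 5) - 1   # ~28% per year for video alone

print(total_2012, total_2017, video_cagr)
```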


References
[1] Cisco Systems, "Cisco Visual Networking Index: Forecast and Methodology, 2012–2017," San Jose, CA, USA, White Paper, 2013.


Tuesday, October 4, 2016

Subjective Tests

  • Subjective Video Quality Assessment Methods for Multimedia Applications [1]
    • Absolute category rating (ACR)
    • Absolute category rating with hidden reference (ACR-HR)
    • Degradation category rating (DCR)
    • Pair comparison method (PC)


  • The method of limits [2]
    • In [3], the authors used this method to determine the JND (Just Noticeable Difference) and JUD (Just Unacceptable Difference) for mixing video tiles of different resolutions.
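
The ACR-style methods listed above produce per-condition rating samples that are summarized as a mean opinion score (MOS) with a confidence interval. Below is a minimal sketch of that summarization step, assuming 5-point ACR ratings and a normal-approximation 95% interval; the rating values are made up for illustration.

```python
import statistics

def mos_with_ci(ratings, z=1.96):
    """Mean opinion score and a normal-approximation 95% confidence
    interval for one test condition rated on a 5-point ACR scale."""
    n = len(ratings)
    mos = statistics.mean(ratings)
    half_width = z * statistics.stdev(ratings) / n ** 0.5
    return mos, (mos - half_width, mos + half_width)

# Ratings from 24 hypothetical subjects for one processed video sequence.
ratings = [4, 5, 4, 3, 4, 4, 5, 3, 4, 4, 2, 4, 5, 4, 3, 4, 4, 4, 5, 3, 4, 4, 3, 4]
print(mos_with_ci(ratings))
```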


References
[1] Subjective Video Quality Assessment Methods for Multimedia Applications, ITU-T Recommendation P.910, Apr. 2008.
[2] George A. Gescheider. 1997. Psychophysics: The Fundamentals. Psychology Press.
[3] Wang, Hui, Mun Choon Chan, and Wei Tsang Ooi. "Wireless multicast for zoomable video streaming." ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 12.1 (2015): 5.

How to determine the reliability of the subjects?

Before analyzing the subjective test results, the reliability of the subjects can be assessed using Cronbach's alpha coefficient [1].
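
A minimal sketch of the computation, assuming the ratings are arranged as a matrix with one row per subject and one column per test condition; the example values are illustrative.

```python
import statistics

def cronbach_alpha(ratings):
    """Cronbach's alpha for a subjects x items rating matrix.

    alpha = k/(k-1) * (1 - sum of per-item variances / variance of
    per-subject total scores), with k the number of items.
    """
    k = len(ratings[0])                        # number of items (test conditions)
    items = list(zip(*ratings))                # transpose: one tuple per item
    item_vars = [statistics.variance(col) for col in items]
    total_var = statistics.variance([sum(row) for row in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Ratings of 5 subjects on 4 test sequences (illustrative values).
ratings = [
    [4, 5, 3, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 3],
    [4, 4, 3, 4],
]
print(cronbach_alpha(ratings))  # a value close to 1 indicates consistent subjects
```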


References
[1] L. J. Cronbach, "Coefficient alpha and the internal structure of tests," Psychometrika, vol. 16, no. 3, pp. 297–334, Sep. 1951.

Depth-Image-Based Rendering (DIBR)


  • Depth-based image processing for 3D video rendering applications [1][2]
  • Virtual view synthesis method and self-evaluation metrics for free viewpoint television and 3D video [3]
  • DIBR based view synthesis for free-viewpoint television [4]
  • Free-viewpoint depth image based rendering [5]
  • Free-viewpoint rendering algorithm for 3D TV [6]
  • View generation with 3D warping using depth information for FTV [7]
  • View synthesis with depth information based on graph cuts for FTV [8]
  • Symmetric bidirectional expansion algorithm to remove artifacts for view synthesis based DIBR [9]
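
The common core of the DIBR methods listed above is the per-pixel warping step: each reference pixel is back-projected to a 3D point using its depth value and re-projected into the virtual camera. Below is a minimal sketch for the rectified, purely horizontally shifted case, where the warp reduces to a disparity shift; the camera parameters and the depth-to-disparity conversion are illustrative assumptions, not a specific paper's implementation.

```python
import numpy as np

def warp_to_virtual_view(image, depth, focal_px, baseline_m):
    """Forward-warp a reference view to a horizontally shifted virtual camera.

    image : H x W x 3 uint8 array, reference texture
    depth : H x W float array, per-pixel depth in metres
    focal_px : focal length in pixels; baseline_m : camera shift in metres
    Returns the synthesized view and a boolean mask of disoccluded holes.
    """
    h, w = depth.shape
    virtual = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)
    # For rectified, purely horizontal camera motion the 3D warp reduces
    # to a per-pixel disparity shift: d = f * B / Z.
    disparity = np.round(focal_px * baseline_m / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]
            # Z-buffer test: keep the nearest surface when pixels collide.
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]
                virtual[y, xv] = image[y, x]
    holes = np.isinf(zbuf)  # disocclusions left for hole filling
    return virtual, holes
```

The papers above differ mainly in how they blend warped reference views and fill the remaining holes, e.g. with inpainting, graph cuts [8], or symmetric bidirectional expansion [9].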


References
[1] T. Zarb and C. J. Debono, “Depth-based image processing for 3D video rendering applications,” in Proc. IEEE Int. Conf. Syst. Signals Image Process., May 2014, pp. 215–218.
[2] Zarb, Terence, and Carl James Debono. "Broadcasting Free-Viewpoint Television Over Long-Term Evolution Networks." IEEE Systems Journal 10.2 (2016): 773-784.
[3] K. J. Oh, S. Yea, A. Vetro, and Y. S. Ho, “Virtual view synthesis method and self-evaluation metrics for free viewpoint television and 3D video,” Int. J. Imag. Syst. Technol., vol. 20, no. 4, pp. 378–390, Dec. 2010.
[4] X. Yang et al., "DIBR based view synthesis for free-viewpoint television," in Proc. 3DTV Conf., May 2011, pp. 1–4.
[5] S. Zinger, L. Do, and P. H. N. de With, "Free-viewpoint depth image based rendering," J. Vis. Commun. Image Represent., vol. 21, no. 5/6, pp. 533–541, Jul. 2010.
[6] P. H. N. de With and S. Zinger, "Free-viewpoint rendering algorithm for 3D TV," in Proc. 2nd Int. Workshop Adv. Commun., May 2009, pp. 19–23.
[7] Y. Mori, N. Fukushima, T. Fujii, and M. Tanimoto, “View generation with 3D warping using depth information for FTV,” in Proc. 3DTV Conf., May 2008, pp. 229–232.
[8] A. T. Tran and K. Harada, “View synthesis with depth information based on graph cuts for FTV,” in Proc. 19th Korea-Japan Joint Workshop Frontiers Comput. Vis., Feb. 2013, pp. 289–294.
[9] H. Ding, Z. Li, and R. Hu, “Symmetric bidirectional expansion algorithm to remove artifacts for view synthesis based DIBR,” in Proc. Int. Conf. Multisensor Fus. Inf. Integr. Intell. Syst., Sep. 2014, pp. 1–4.